The fediverse is a privacy nightmare

ActivityPub, the protocol that powers the fediverse (including Mastodon – same caveats as the first two times, will be used interchangeably, deal with it) is not private. It is not even semi-private. It is a completely public medium and absolutely nothing posted on it, including direct messages, can be considered even remotely secure. Worse, anything you post on Mastodon is, once sent, for all intents and purposes completely irrevocable. To function, the network relies upon the good-faith participation of thousands of independently owned and operated servers, but a bad actor simply has to not act in good faith, and there is absolutely no mechanism to stop them or to get around this. Worse, whatever legal protections are in place around personal data are either non-applicable or would be stunningly hard to enforce.

This is something I touched on in my Meta piece, but I think it bears reiterating in full as a blunt, stark message for anyone who needs to know this:

If you have any objection at all to your posts and profile information being potentially sucked up by Meta, Google, or literally any other bad actor you can think of, do not use the fediverse. Period. Even if your personal chosen bogey-man does not presently suck down every single thing you and your contacts post, absolutely nothing prevents them from doing so in the future, and they may well already be doing so, and there’s next to nothing you can do about it.

To illustrate why, we need to understand exactly how the fediverse works.

How fedi works

Mastodon and the fediverse are not on a single server. Instead, people create servers of their own, called “instances” (again, I will use “instance” and “server” interchangeably here – they are functionally the same thing). These all communicate with each other using a protocol called ActivityPub.

Rather than registering “a Mastodon account”, you register an account on a particular server. That server then lets you access the posts of, and communicate with, users on any other server (in theory – in practice this is a bit more complicated). In total, there are more than 15,000 servers operating at any one time.

Anyone can start a server. There is no prequalification required and there are various different server packages that you can use. People can, and do, start servers on their own home computers, or pay $20 or so a month to Linode or whoever to run a server for them. All you need is a domain name, some Linux experience and maybe a few hours to get everything set up. There are also plenty of managed hosts for servers that make it, essentially, a few clicks. There are fediverse server packages that some guy can run on a Raspberry Pi. There are massive servers run by large non-profits, smaller ones like the one I used to run, which are just run by individuals for themselves and a small group, and even smaller ones which are run by one person for one person. But functionally – anyone can start a server.

The “federated” bit of the fediverse is that posts are distributed to followers on different servers via ActivityPub. If I have an account on one server and I am followed by someone on another, whatever I post from that point on will be stored not only on my server, but also on theirs. The follower’s server will start making copies of any posts I make, essentially forming a duplicate of my account’s content from the point of following.

But in practice, most people with any significant number of followers will have followers on maybe fifty or so different servers. So every time you post, your posts are automatically distributed to those fifty servers, who each retain a full copy of your posts and (usually) any media you post along with them. If someone on one of those servers “boosts” one of your posts, it will get redistributed to their followers, and saved on their followers’ servers.
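That fan-out can be sketched in a few lines of Python. This is an illustrative model only – the server names are hypothetical, and real servers deliver via signed HTTP POSTs to each inbox (often a shared inbox) rather than a simple loop:

```python
# Minimal sketch of ActivityPub fan-out: one new post becomes one
# delivery per follower's home server, each of which stores its own copy.
# All names here are hypothetical placeholders.

def build_create_activity(actor: str, post_id: str, content: str) -> dict:
    """Wrap a note in a Create activity, roughly as ActivityPub does."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor,
        "object": {"id": post_id, "type": "Note", "content": content},
    }

def inboxes_for(followers: list[str]) -> set[str]:
    """Each distinct follower server gets (and keeps) a copy."""
    return {f"https://{handle.split('@')[-1]}/inbox" for handle in followers}

followers = ["alice@server-b.example", "bob@server-b.example",
             "carol@server-c.example"]
activity = build_create_activity("https://server-a.example/users/me",
                                 "https://server-a.example/posts/1", "hello")
targets = inboxes_for(followers)
# Three followers, but two distinct servers: each server receives -- and
# retains -- its own full copy of the post.
```

The point of the sketch is that distribution is per-server, not per-follower: once any one user on a server follows you, that whole server holds your content.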

If you delete a post on Mastodon, your server will send an ActivityPub Delete command to any servers that it knows have copies of the post, instructing them to delete their copies. If your Mastodon server shuts down, its admin should run a mass-delete command that tells all servers it has ever contacted to delete all your posts and everything about you. But there is nothing compelling them to do so.
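The voluntary nature of deletion is worth making concrete. The following is an illustrative sketch, not real server code: a Delete activity (roughly in the shape ActivityPub uses) is sent to two toy servers, and nothing in the protocol notices that one of them simply ignores it:

```python
# Sketch only: deletion over ActivityPub is a request, not a command.
# A compliant server drops its copy; a bad actor silently keeps it,
# and the origin server has no way to tell the difference.

def build_delete_activity(actor: str, post_id: str) -> dict:
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Delete",
        "actor": actor,
        "object": {"id": post_id, "type": "Tombstone"},
    }

class CompliantServer:
    def __init__(self):
        self.posts = {}
    def handle(self, activity: dict) -> None:
        if activity["type"] == "Delete":
            self.posts.pop(activity["object"]["id"], None)

class BadActorServer(CompliantServer):
    def handle(self, activity: dict) -> None:
        pass  # ignores the request entirely -- and keeps the copy

good, bad = CompliantServer(), BadActorServer()
for server in (good, bad):
    server.posts["https://a.example/posts/1"] = "my post"
    server.handle(build_delete_activity("https://a.example/users/me",
                                        "https://a.example/posts/1"))
# good.posts is now empty; bad.posts still holds the post, and no error,
# acknowledgement or receipt ever flows back to the sender.
```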

The privacy problems

Anyone can run a server, and they don’t have to be identifiable

A server that retains your posts could be run by anyone. It could be run by a registered German non-profit with an identifiable leader, like the flagship instance. It could be like Meta’s putative Threads, in which case it is run by a US-domiciled public corporation. Or it could be like my old server, in which case it is run by some rando who does not use their real name, ever. Or it could be a Pleroma instance run on a Raspberry Pi by someone who really, really likes Hitler. Or it could be whatever the fuck this is. You have no way of knowing.

As soon as a post reaches another server, they control it totally

As soon as you make a post on Mastodon and someone on a different server reads it, that server has downloaded a copy of whatever it is you posted, along with your profile information.

The Mastodon software offers you the option to delete a post. While this definitely deletes the post from your server, it does not necessarily delete it from other servers; it only sends a command requesting that those servers delete it. The servers do not have to honour this, and there is no feedback mechanism to see whether the deletion was successful, so you’ll never know if it was or not. A server can simply disregard the message, and retain – and possibly even redistribute – copies of your posts for as long as it likes. The upshot is that whatever you post on an ActivityPub service should, for all intents and purposes, be considered irrevocable. There is no means to categorically enforce the deletion of your content or your profile information from the entirety of the fediverse. Period.

If your server closes down, and does not run the “self-destruct” command that tells all servers it has ever federated with to delete all record of it, its users and its posts, then they will stay on those servers indefinitely with no simple means of deleting them, or even knowing that they are there. And that’s assuming that the other servers would have honoured that deletion request anyway. Again, a bad actor doesn’t have to. Mastodon includes this self-destruct command – many other fediverse software packages do not.

Worse still, different server packages do not have to even respect privacy settings on your post, or even whether you have blocked someone at an account level. It can and has happened that bad actors have run modified versions of server software (most famously Pleroma and its offshoots) that can evade and ignore account blocks. Sent a “DM” to someone, or tagged it “followers only”? Plugins exist to automatically convert these, on the receiving end, to public posts that are viewable by anyone. This isn’t something hidden, or theoretical – it is publicly advertised and freely admitted to.

There is simply no way you can really avoid this. A system that works based solely on voluntary and mutual agreement, without any real compulsion for anyone to abide by what is expected of them, can be easily defeated by someone simply ignoring the expected norms and doing whatever they please. There is no central enforcement authority that mandates that servers have to implement the whole ActivityPub spec, or respond to requests in the expected way, and a server that does not do so has no means of being completely booted off the network.

Server blocking cannot fully mitigate this

Plenty of fediverse servers can, do and indeed should block some of the most egregious bad actors out there, but it is a constant game of whack-a-mole for server administrators to keep blocking servers run by bad actors – and in the meantime, those servers can still participate fully in the network and read – and save – whatever posts they like. Even after they are blocked, there are means by which they can still see, download and keep the posts of users on servers that have blocked them.

Say you are on server A, and you have a contact, Mike, on server B. Server C is a shithole server full of harassers and Nazis. Server A blocks server C, but server B does not, for whatever reason. Someone on server C, Adolf, follows Mike. Server C cannot see your posts directly… but if Mike boosts one, Adolf can see it, so server C can download and keep it, and will display it to all of Adolf’s friends on server C via the Federated feed. And because server C then knows about both server A and your account specifically, in most cases it can start making API requests to your server, anonymously, to get your account information and any other public posts you have.
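The leak in that scenario is mechanical enough to simulate. Here is a deliberately tiny toy model (the server names and the `Server` class are invented for illustration; real federation is far more involved) showing that A’s block never gets consulted on the path the post actually travels:

```python
# Toy model of boost leakage past a server block. A block only matters
# on the hop it sits on; a boost routes the post around it.

class Server:
    def __init__(self, name: str, blocks: set[str] = frozenset()):
        self.name, self.blocks, self.store = name, set(blocks), []

    def receive(self, post: str, via: str) -> None:
        # Delivery is refused only if *this* server blocks the sender.
        if via not in self.blocks:
            self.store.append(post)

a = Server("A", blocks={"C"})  # your server: blocks the bad actor
b = Server("B")                # Mike's server: federates with everyone
c = Server("C")                # the bad-actor server

post = "your post, written on A"
b.receive(post, via="A")   # Mike follows you, so B stores a copy
c.receive(post, via="B")   # Mike boosts it; C receives it via B, not A
# c.store now contains your post, even though A blocks C: the A<->C
# block was never on the delivery path.
```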

You might well not know about this. Why would you? You’re a user of server A. You don’t necessarily know who or what server B blocks or doesn’t block. You may not even know that server C exists or who Adolf is. But all it takes is someone to put a post in server C’s eyeline and they can take it and keep it, and then ignore any and all requests to delete it. Meanwhile, Adolf and his friends Rudolf and Hermann can have a lovely little laugh at your expense in your replies… on their server. Which your server blocks, so you will never see it or know about it. But Servers D, E and F that are also blocked by your server, but not by C, can also join in.

Again, this is not theoretical. This happens. A lot. It reduces the value of defederation as a means of ensuring user security from bad actors by making you only as cut-off from shithole servers as the least cut-off server that boosts your posts. Over how many degrees of separation are you personally willing to vet your server’s block list to ensure that a boost of a boost of a boost of a boost doesn’t get you a starring role on the bad side of fedi?

Your server can, at least, turn on “authorised fetch”, which mitigates this issue somewhat by ensuring that servers can’t read posts without identifying themselves, allowing server blocks to be properly enforced. But authorised fetch also breaks various other bits of Mastodon functionality, not least by completely locking down the server and all posts on it from everyone who isn’t an authorised server, not just bad actors. Given that Mastodon already has UX issues with discoverability and reply chains appearing “broken”, and that this makes an already unwieldy and resource-heavy server even more resource-intensive, it’s not on by default, and it’s not surprising that a great many servers do not enable it.
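For reference, on stock Mastodon this is a documented server-side environment toggle rather than anything exposed in the UI (the exact file location can vary by install):

```shell
# .env.production -- Mastodon's documented toggle for authorised fetch
# ("secure mode"). Once enabled, other servers must cryptographically
# sign their fetch requests, so server blocks can actually be enforced --
# at the cost of the side effects described above.
AUTHORIZED_FETCH=true
```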

Oh, and there’s also no obvious way to see whether a server has it enabled, as far as I can tell. I can’t even find out whether the biggest instance does, but as far as I can tell it doesn’t. No worries though, it’s only literally the biggest Mastodon server on the planet.

Your posts are only as secure and private as the least secure and private server that can see them

These servers, and their treatment of your posts, are also subject to the security consciousness – or lack thereof – of their administrators. Kolektiva, an anarchist instance, gave everyone a stunning demonstration of this fact when one of its admins was raided by the FBI while holding, on their personal computer, an unencrypted copy of the Kolektiva database.

This database does not just include posts made by Kolektiva’s users on the public network; it also includes posts viewed by any of Kolektiva’s users, posts from people followed by users on Kolektiva, posts boosted by Kolektiva’s users, and every direct message sent to and from users on Kolektiva’s servers. Given the ideological leanings of Kolektiva’s user base, it can be reasonably surmised that there are things in that database that are incriminating, or that might be of value or interest to law enforcement.

Quite rationally, a lot of Kolektiva’s users (who naturally are already not going to be especially trusting of the state) might feel exposed by this and want to delete their posts from Mastodon. Unfortunately for them, they are subject to the above proviso regarding deletion; all they can do is send out “please delete this” messages and hope that the other servers comply with them. If they don’t, they are functionally shit out of luck. I really don’t mean to scare them, but the FBI are possibly the least of their problems here.

This boneheaded approach to data security is only notable because it’s one of the few we know of, because Kolektiva (and they should get credit for this) readily owned up to it. How many other servers have been compromised or had the computers with their databases seized? And in what jurisdictions? How many servers hold your posts without you knowing about it? And what stupid clownish things are they doing with them? You simply have no way of knowing. But your posts are only as secure and private as the least secure and private server that has them.

You don’t even need to go outside the spec to suck down everyone’s posts

To add to the fun, Mastodon is so far from being a private platform that it is absolutely trivial for someone to simply set up an account on a big, well-federated instance, grab an API key, spend ten minutes writing a Python script that yanks every post it can from that instance’s federated feed, suck down as much raw post data from the fediverse as they please and do whatever they like with it. Instance rules don’t typically forbid this; neither do their privacy policies. And that “someone” is likely to be a pseudonymous private individual, completely unidentifiable to anyone.

While everyone has been running around in headless-chicken mode about Mark Zuckerberg coming to hoover up your toots, they’ve failed to notice that there are frankly far worse and more immediately harmful people who could be doing so, who already are doing so, and who are doing so without anyone even noticing or knowing – and in a way that is, for all intents and purposes, completely authorised by the server operators.

The upshot

The end result is this:

  • Posts on Mastodon are immediately distributed to potentially hundreds of different servers.
  • As soon as a server holds your post content and profile information, it is subject to their information security practices, and they can redistribute it, potentially without you ever knowing.
  • For a great many servers, you have no real way of knowing who owns them or how competent they are.
  • You can only request that other servers delete your posts via the Mastodon software. You cannot enforce their deletion.
  • Other servers do not have to respect the privacy settings of your posts. The only mechanism to stop them is for your server to block them, which only works for future posts and even then only imperfectly.
  • If another server does not delete your posts when asked, you are stuck resorting to legal means to try and compel this… with hundreds of different servers all around the world run by people over whom you have no leverage, who may not even care to respect data privacy rules, and with which you have no relationship – and who you may not even know about.
  • To reiterate: you absolutely should not post anything on the fediverse or Mastodon that you are not comfortable with being archived permanently by the absolute worst people you can think of.


What about the law?

To preface, I am not a lawyer, or someone who particularly knows much about GDPR except on a surface level – but then I would guarantee that most server owners aren’t either, and they’re the ones to whom it is probably most pertinent. And it is worth discussing the legal framework that applies.

Servers can be based in any jurisdiction. Data privacy rights in the EU and the UK are governed by the General Data Protection Regulation (GDPR). The US and many other jurisdictions have other, weaker protections. But generally, the privacy protections of your server’s home jurisdiction apply to your posts on that server. If your server is based in the UK, UK GDPR applies. If it is anywhere else in the world, but you are in the UK or EU, then theoretically GDPR applies by virtue of the data being processed being that of someone subject to GDPR. If it is in the US, and you are in the US, well, good luck.

Under GDPR, any site you use should have a privacy policy that describes what it does with your data. The biggest instance’s policy – which a great many other instances crib from – notes that you can delete your account, and that you can delete your posts, but does not mention that this is conditional on other servers respecting that deletion. This feels like a significant omission.

At least that server has a privacy policy. There are plenty of small, single-person servers which do not. You can guarantee that the Hitler-loving Pleroma guy doesn’t. Indeed, under GDPR, if he’s just running that Pleroma for himself, he doesn’t have to.

GDPR does confer significant rights of deletion of information, and rights to direct how your data is processed, or whether it should be processed at all. But the problem with this is enforcement. How do you serve legal papers on a person who is potentially fictitious, in a jurisdiction halfway around the world? How would a server owner even know what servers their users’ posts are hosted on, anyway, in order to be able to commence the exhaustive process of trying to locate contact details from all of them?

How does this even work in a GDPR context, anyway? Does a Mastodon server act as a “controller” that directs the other servers that process its posts, or is it just a “processor”… or both at once? If I post on my server and my post gets syndicated to a different server, who is responsible for that? Am I a “user” of the other server and thus gain GDPR rights over it no matter where it’s located jurisdiction-wise, or is that server a “processor” directed by my server, the “controller”? Can I raise a subject access request against them to get my data? If they tell me “no, I won’t erase it”… what then?

The fediverse is alien to, and predates, the concept of GDPR, which was created envisaging scenarios in which disparate consumers acting as natural people engaged with recognisable legal entities which engaged with other recognisable legal entities in ways governed by binding contracts and legally-enforceable terms and conditions. Mastodon doesn’t have that. Mastodon is a very loose informal association that just sort of… happens, with a melange of natural people interacting with corporations, unincorporated associations, non-profits and other natural people, all of whom could be considered data subjects, data processors or data controllers in their own right. GDPR wasn’t even a thing when Mastodon started, and what “terms of service” exist for users are more along the lines of “don’t be a racist, transphobic shithead” than anything firmly contractual. It’s all remarkably informal for something that hosts what is deemed to be the personal data of millions of people across numerous jurisdictions.

As far as I can tell there is no actual settled answer to all of this and nobody is particularly exercised about finding one. This is partially because the fediverse is so small fry in the scheme of things, and the infrastructure so atomised, that it’s deemed to “not really matter” in the same way that a local cupcake shop’s email marketing doesn’t really matter to national privacy regulators. It’s also partially because, I think, everyone is aware of what a massive can of worms would be opened if anyone decided to look into things too deeply.

The situation is at least a lot more clearcut for users who wind up subject to jurisdictions that don’t have GDPR or equivalent legislation: you are on your own. Have fun!

I’m not the only person to have noticed this, but I don’t think it has really got as much attention as it should, because there are some serious issues brewing here; all it will take is someone who discovers that their PII is being held by one Pleroma Nazi too many and decides that they want to start enforcing their data protection rights in good faith, and the house of cards will start to crumble. That your rights of deletion over your data on Mastodon are exceptionally contingent is not something that is spelt out by anyone, or that anyone particularly wants to confront; again, I think mainly because everyone is scared to open that particular can of worms because it would not end well. Everyone was up in arms that “the admins can read your DMs” was a stupid criticism of Mastodon, but failed to notice (or, as I would suspect, would rather not talk about) the massive elephant in the room: that quite a few other people can read your DMs too, as can anyone who happens to gain access to the servers of anyone you’ve DMed. Which, now in at least one instance (pun not intended), includes the FBI.

My worry about Mastodon and data privacy is not, at the end of the day, the prospect of Meta having all my posts. Meta is identifiable. I can send them a subject access request to find out what data they hold on me, and I can complain about them to the Information Commissioner’s Office, and I can sue them to enforce my rights, under laws I can identify and know exactly how they apply to me and what both my and Meta’s role is in the equation. They have identifiable office holders, and registered offices, and a legal team, and a data protection officer, and a privacy policy, and the EU ready to smack them with a sledgehammer should they put a foot wrong (as they have done and assuredly will do in future). Even on a more base level than that, speaking to the more real and present threats that a lot of people on the Internet face: Meta isn’t going to swat me, or doxx me, or send me death threats, or try to ruin my life.

In that context, I am not at all worried about Meta on fedi, really. There are things I can do about Meta, and the worst they can reasonably do to me is try, fruitlessly, to advertise to me.

My worry is the guy with the Raspberry Pi.

15 responses to “The fediverse is a privacy nightmare”

  1. I wonder if you could spin out some examples of scenarios where this is a problem. If I assume everything I publish is out there forever, how could that be a problem for me? Admittedly I don’t ever publish personal things on Mastodon.

    Are there situations where vulnerable people could be put at risk? One of the things that Mastodon has tried to do is protect people from harassment.

    1. So, the key issue actually is harassment. It is possible for a bad actor server to coordinate harassment against someone, in the replies under their post, without that person ever knowing, purely by a quirk of how ActivityPub operates. It is also possible for unlisted or follower-only posts to be remotely converted into publicly-viewable ones.

      By no means is a user on Mastodon any more safe from harassment than anyone else on any other service, except perhaps by obscurity. There is nothing intrinsically protective about it, and a great deal which is actively enabling.

The other issue is that people change. You may not necessarily want to be associated with something you said twenty years ago. While on other platforms what you say can be archived by third-party services, the actual surface area of this is pretty minimal. If e.g. Twitter retains my posts when I don’t want it to, I can write to them and invoke my GDPR rights, because they are identifiable. If an archive site archives my stuff, I can identify them and contact them. That is at best a massively onerous task with fedi, which automatically distributes and archives, beyond your control, anything that you put out. This strongly conflicts with e.g. the right to be forgotten inherent in GDPR.

  2. The post has some fundamental errors.
    ActivityPub is not the only protocol that exists within the fediverse.
    The failures with Mastodon are true, but not applicable to other systems such as Diaspora, Friendica and Hubzilla.

    1. Cool beans, but the ActivityPub based sites are the vast majority of the Fediverse, and frankly the Fediverse isn’t that big to begin with. You are talking about also-rans within also-rans.

      I’m sure your chosen solution would mitigate this, but next to nobody is using it. The only Fediverse protocol with any actual adoption is AP.

  3. Sites like the Wayback Machine are commonly used specifically to save posts on Twitter and the like in case they’re deleted. The Wayback Machine already automatically crawls many accounts, and these archives (WARC, WACZ, signed WACZ) can be redistributed.

    Nothing stops people from sharing screencaps of private posts or DMs made on sites like facebook either.

    Am I missing something? These issues already exist in traditional social media.

    1. Last time I checked, archive.org doesn’t push everything you post to anyone who asks. Archive.org doesn’t allow you to effortlessly boost a site to your following. Archive.org doesn’t allow you to comment on a website.

      1. My point is that the means to preserve and make not-private already exist and are used. You just have to share a link to or a screencap if you want to spread deleted/private content to your followers, allow comments and the like.

        1. I think most people could quite comfortably see a distinction between “someone can screenshot my posts” and “my posts can be (effectively) automatically simultaneously screenshotted by a thousand different people at once and kept in a thousand searchable databases forever, by anonymous people who I don’t even know exist, and potentially against my express wishes, and also everyone keeps saying that actually this is private and only I control my information when that is obvious bollocks”.

          It’s one thing to have a post you made screenshotted, it’s another to have that process mechanised and easily weaponised against you in ways that nobody seems to fully disclose.

          1. I’m not convinced. The distinction doesn’t help: you have to intentionally use software that ignores delete requests or that makes private posts public. This means the privacy risk is only caused by the few pieces of software run by bad actors; you can’t speak of the thousands of ordinary servers replicating our content – they aren’t a threat. This puts the privacy risk on the same level as people archiving posts or screenshotting private posts from Twitter etc.

            What’s going on in my head, more concretely: if you go by a pseudonym and post nothing personal, it won’t matter to you if your posts are saved forever on a server somewhere. If you go by your real identity and post about your real life, then you must assume someone out there could have saved them. That’s really what posting in public means. So I don’t see any privacy issue here to begin with.

    2. Exactly. This article is situational click-bait. Another big exodus from Twitter this past weekend. So this garbage is brought up. If you don’t want it on the internet, don’t put it on the internet!!! The fediverse is NO different (or worse) than any other site. In fact, because you can run your own “private” instance, it is actually MUCH better. If you don’t feel safe on the fedi, then you don’t feel safe anywhere and that is just a sad existence. Maybe it’s time to step away from the computer.

  4. There are interesting points made here — but what makes this any different from someone scraping their Twitter timeline? When Twitter had a (realistically priced) API, you could definitely just set up a cronjob to pull down tweets matching a search term, your follows, etc. and store them in a database. The only mercy you’d have is Twitter noticing someone kept requesting their follows over and over again, which is a legitimate use case used by bots and third-party clients.

  5. It’s a bit worse than that.

    Anyone can subscribe to your RSS feed and get a forever copy of your public and unlisted posts. Most RSS readers offer a convenient full-text search, even. It’s a feature. No server required. Obviously RSS doesn’t have deletion requests.

    There is no way to turn off this feature unless you’re on Hometown, glitch-soc Mastodon or GoToSocial.

  6. In other words, posting on the Fediverse is just like posting or commenting on a blog.

  7. Without economic settlements baked into the protocols none of these networks are sustainable. Settlements provide the incentives and disincentives to interoperate, balance risk and provide a mechanism to distribute value. Settlements should reflect marginal cost of clearing supply and demand ex ante and be structured such that the larger actor pays the smaller actor and slightly (and proportionally) higher settlement to generate greater network effects. Furthermore, fully distributed systems are inherently wasteful and insecure. Well functioning networks and internetworks have layer 1-3 tradeoffs driven by actions in layers 5-7 to clear supply and demand efficiently across a broad array of contexts and solutions. Until fediverse actors/developers embrace these principles there is little likelihood of long-term sustainability and generativity.

  8. Nobody loves bloonface:

    The author is either a moron for not understanding the internet, or a terrible person for fearmongering.
