I have deeply mixed feelings about #ActivityPub's adoption of JSON-LD, as someone who's spent way too long dealing with it while building #Fedify.
Part of me wishes it had never happened. A lot of developers jump into ActivityPub development without really understanding JSON-LD, and honestly, can you blame them? The result is a growing number of implementations producing technically invalid JSON-LD. It works, sort of, because everyone's just pattern-matching against what Mastodon does, but it's not correct. And even developers who do take the time to understand JSON-LD often end up hardcoding their documents anyway, because proper JSON-LD processor libraries simply don't exist for many languages. No safety net, no validation, just vibes and hoping you got the @context right. Naturally, mistakes creep in.
But then the other part of me thinks: well, we're stuck with JSON-LD now. There's no going back. So wouldn't it be nice if people actually used it properly? Process the documents, normalize them, do the compaction and expansion dance the way the spec intended. That's what Fedify does.
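For anyone who hasn't seen what that dance looks like, here's a minimal sketch using the jsonld package from npm; the Note document is made up and this isn't Fedify's actual code, just an illustration of expansion and compaction:

```typescript
import jsonld from "jsonld";

// A made-up ActivityPub Note, the kind of document most servers hardcode.
const doc = {
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Note",
  "content": "Hello, fediverse!",
};

// Expansion resolves every term against the @context, so short keys like
// "content" become full IRIs such as
// "https://www.w3.org/ns/activitystreams#content".
const expanded = await jsonld.expand(doc);

// Compaction goes the other way: given *your* context, it rebuilds the short
// form. Consuming the expanded (or re-compacted) document is what makes you
// independent of whatever key names the sender happened to use.
const compacted = await jsonld.compact(expanded, {
  "@context": "https://www.w3.org/ns/activitystreams",
});

console.log(JSON.stringify(compacted, null, 2));
```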
Here's the part that really gets to me, though. Because Fedify actually processes JSON-LD correctly, it's more likely to break when talking to implementations that produce malformed documents. From the end user's perspective, Fedify looks like the fragile one. “Why can't I follow this person?” Well, because their server is emitting garbage JSON-LD that happens to work with implementations that just treat it as a regular JSON blob. Every time I get one of these bug reports, I feel a certain injustice. Like being the only person in the group project who actually read the assignment.
To be fair, there are real practical reasons why most people don't bother with proper JSON-LD processing. Implementing a full processor is genuinely a lot of work. It leans on the entire Linked Data stack, which is bigger than most people expect going in. And the performance cost isn't trivial either. Fedify uses some tricks to keep things fast, and I'll be honest, that code isn't my proudest work.
Anyway, none of this is going anywhere. Just me grumbling into the void. If you're building an ActivityPub implementation, maybe consider using a JSON-LD processor if one's available for your language. And if you're not going to, at least test your output against implementations that do.
#JSONLD #fedidev
@hongminhee @jalefkowit huh. I’ve been pondering using it for some projects of mine, so this is good to know.
Is it a fundamental problem with JSON-LD, such that it should just be avoided, or a problem with how ActivityPub uses it?
And is there something else you’d recommend that fulfills the same goals?
-
@sl007 @julian I admit I didn't pay attention to immers at the time - I don't play games, not even chess. I was just using chess as an example, didn't mean to trigger anyone's trauma!
Still, it kinda proves my point. You have to use standard AS vocabulary because Mastodon, and if you squint then sure, Travel and Arrive, why not? But given some of the conversations I've seen on this forum, I shudder to think how that would go down if you tried to get approval for that usage from "the community" first.
@mat Just btw, this is 7 years old https://www.reddit.com/r/chess/comments/94ubnd/chess_over_activitypub/ but anyway. However, given that I have at least 3 apps, including immers and redaktor, where I can use the first chess spec:
if more than 2 implementations will also support this second chess specification, I will do so too.
-
@lkanies@hachyderm.io @jalefkowit@vmst.io To be honest, I'm not too sure myself. I just know that JSON-LD was originally planned as a foundation for the Semantic Web. I can only guess that if ontology is useful in a certain area, then JSON-LD would probably be useful there too.
-
@hongminhee do you use the activitystrea.ms module from npm? It takes a lot of the pain out.
-
@hongminhee I agree that new developers should use a JSON-LD processor. It saves a lot of heartache.
-
@evan@cosocial.ca I don't remember exactly, but I think I came across it while doing research before developing Fedify. I probably didn't use it because the TypeScript type definitions were missing. In the end, I ended up making something similar in Fedify anyway.
-
@silverpill I'm sorry, I'm not aware of that and I thought I read the specs pretty thoroughly. Could you point me in the right direction for where you got this information from?
-
@hongminhee I have the same feeling. The idea behind JSON-LD is nice, but the tooling isn't widely available, so developing with it becomes a headache: do I write a JSON-LD processor myself, spending twice the time I wanted to, or do I just treat it as plain JSON for now and hope someone makes a JSON-LD processor soon? Often the answer is the latter, because it's a big task that nobody signed up for when setting out to build fedi software.
-
@silverpill aaah, I see. I think we've had this discussion before (or at least I had it with someone else).
For me "SHOULD" falls in the category of the robustness principle: "be conservative in what you do, be liberal in what you accept from others".
So for me, if you treat a "SHOULD" in a spec as non-mandatory, you haven't really implemented the spec.
-
@silverpill regarding size, ActivityPub is such a verbose protocol that the hundred or so raw bytes you save by omitting the context are most likely negligible once connection compression is taken into account. So to me that's not entirely a "valid reason".
And as a developer myself, I think that contexts, even in an implementation that doesn't do valid JSON-LD processing, offer enough guidance for building a data vocabulary to have plenty of value.
Do you propose we replace contexts with Open API specifications, or how do we coordinate what's a valid vocabulary data object in a federated network? And how do you propose that others discover these specs?
-
@hongminhee can you point me to the parser you use for Fedify?
One of my long-term plans for GoActivityPub is to build a code generation tool based on contexts, and I would need some prior art to see what's important in parsing JSON-LD and RDF.
-
@hongminhee@hollo.social I'll give you my take on this... my understanding is that with JSON-LD you can have two disparate apps using the same property, like thread, and avoid namespace collisions because one is actually https://example.org/ns/thread and the other's really https://foobar.com/ns/thread. Great.
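To make that concrete with a sketch (both contexts and documents below are invented, and the jsonld npm package just stands in for any conforming processor):

```typescript
import jsonld from "jsonld";

// Two hypothetical apps both use the short key "thread", but their contexts
// map it to different IRIs.
const docA = {
  "@context": { "thread": "https://example.org/ns/thread" },
  "thread": "replies go here",
};
const docB = {
  "@context": { "thread": "https://foobar.com/ns/thread" },
  "thread": "a completely different concept",
};

// After expansion the short key disappears and each value hangs off the full
// IRI its own context declared, so the two properties never collide.
console.log(await jsonld.expand(docA));
// → [ { "https://example.org/ns/thread": [ { "@value": "replies go here" } ] } ]
console.log(await jsonld.expand(docB));
// → [ { "https://foobar.com/ns/thread": [ { "@value": "a completely different concept" } ] } ]
```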
I posit that this is a premature optimization, and one that fails because of inadequate adoption. There are likely documented cases of implementations using the same property, and those concern the actual ActivityStreams vocabulary; the solution to that is to communicate and work together so that you don't step on each other's toes.
I personally feel that it is a technical solution to a problem that can be completely handled by simply talking to one another... but we're coders, we're famously anti-social yes? mmmmm...
@pintoch read this thread?
-
@silverpill personally I feel like the various activity/object signing methods that get used in recent FEPs are more egregious from a size point of view, when the in-spec behaviour for obtaining canonical versions of a resource is to fetch them from their server, instead of relying on random object signing that introduces so much more friction.
-
@mariusor@metalhead.club I thought the whole point of signing objects or attaching proofs (none of which I do, mind you) is precisely to save the need to make a new request, which comes with its own overhead.
The good thing is fetching from the canonical source will never go out of style.
Aside, it seems like I'm only getting Marius's posts, not silverpill's. Makes for an interesting one-sided exchange

-
@mariusor@metalhead.club It's barely documented, but has worked well so far!
fedify/packages/vocab-tools at main · fedify-dev/fedify
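(Not how vocab-tools works internally, but as a starting point for the prior-art hunt on the RDF side: the jsonld package from npm can already dump the triples behind a document, which is one possible input for a code generator. The Note below is invented.)

```typescript
import jsonld from "jsonld";

// Turn a JSON-LD document into N-Quads to see which IRIs the context
// actually maps its terms to; a generator could start from this (or from
// the expanded form) instead of re-implementing context handling.
const doc = {
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://example.com/notes/1",
  "type": "Note",
  "content": "Hello",
};

const nquads = await jsonld.toRDF(doc, { format: "application/n-quads" });
console.log(nquads);
// Prints triples roughly like:
// <https://example.com/notes/1> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <https://www.w3.org/ns/activitystreams#Note> .
// <https://example.com/notes/1> <https://www.w3.org/ns/activitystreams#content> "Hello" .
```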
-
> to save the need to make a new request
@julian probably. But then there's Mastodon, which treats so many activities as transient, therefore unfetchable, which I think is what made object signing an actual necessity. And outside of the happy path where the actor that generated the object is already known to the receiving server, there's still the need to fetch their key, so there are no savings for 10-20% (number out of my butt) of activities... As you can probably tell, to me the friction introduced by signatures isn't a good enough tradeoff compared to just making one more request.
-
@silverpill lol, that's simply madness to me. See the sibling reply to Julian for why I think signatures, which is what I imagine you mean by "authenticated", are an unnecessary contrivance.
I meant "data object" in this context as the end-result binary data type that your application deals with, which for my preference, needs to match the structure of the incoming payload as closely as possible.
-
@hongminhee I'm reading this thread as a relative noob, but what I see again and again: almost no one "properly" implements #ActivityPub, largely because #JSONLD is hard but also because the spec itself is unclear. Most people who get stuff done have to go off-spec to actually ship.
This seems a fundamental weakness of the #fediverse - and that's disregarding the limitations coming from the base architecture. It seems to pose a mid/long-term existential threat.
What can we do to help improve things?