This is a pretty good summary of what’s happening with DRM at the W3C
From my GitHub post:
I confess to being mystified by the argument – promulgated by DRM advocates – that standardization at W3C doesn’t matter to the viability of DRM on the web.
On the one hand, we have lots of urgent talk about the user harms arising from the difficulty of implementing DRM in the HTML5 world, where NPAPI and its like have been abolished. Browser vendors and publishers are left to strike expensive, difficult-to-sustain, one-off deals to synchronize proprietary components at both ends, creating technical problems that would cause users to reject the publishers’ products.
On the other hand, we have the argument that DRM on the web is inevitable and actually a fait accompli, entirely separate from the outcome of the W3C process, such that the decision not to publish EME as a W3C standard would make no “difference” (“difference” being the thing that we must seemingly enumerate in order to advance this debate).
But if DRM will happen regardless of W3C standardization, with no “difference,” then there will be no “difference” if the W3C doesn’t publish it, or requires members to agree to a nonaggression covenant as a condition of doing so.
DRM’s “difference” and inevitability is thus posed as simultaneously maximum and minimum, totally irrelevant and utterly salient. I believe the technical term for this in SDOs is “having one’s cake and eating it too.”
Thrown into this mix is the asserted inevitability of the web itself being sidelined in favor of apps and walled gardens if DRM doesn’t become part of HTML5, but this is usually uttered in the same breath as a blank assertion that DRM is coming to HTML5 no matter what the W3C does. Only one of these things can be true.
A note on accessibility: DRM laws make any accessibility features built into the spec the ceiling, not the floor, on accessibility. Notably, the current spec excludes any kind of third-party automated bulk or realtime processing, such as feeding cleartexts into a machine-learning system to spot and interdict seizure-causing strobes, to shift color gamuts for color-blind people, or to add subtitles or descriptive tracks.
The oft-repeated assertion that humans could manually add these features to EME-locked videos is obviously deficient. UC Berkeley just killed 20,000 hours of instructional videos because it couldn’t adequately subtitle them. The fact that an army of humans could produce subtitles and then add them to the videos is nice, but in the absence of such an army, and in the presence of ever-better machine subtitling tools, it’s utterly, blatantly obvious that EME will stand in the way of the future of legitimate, powerful accessibility adaptation.
Is there anyone who believes that in the future the majority of accessibility adaptation for any media will be done by humans, working by hand? Here’s what I think:
- DRM-protecting laws mean that making DRM easier to implement on the web makes the web intrinsically less open, less safe, and less accessible
- Standardization matters and makes technology more viable
- EME is unfinished and will require future versions (this was the argument for pursuing a W3C policy interest group that couldn’t affect EME – it would affect the inevitable future versions), so the W3C walking away from EME would have material effect on its viability
- This means that DRM standardization advocates need the W3C process to continue, and must work with people who want to safeguard open web equities in the Consortium if they are to make progress
- The EME process – and the W3C’s credibility – are now at a crossroads because DRM advocates literally refused any further discussion of this, 13 months ago, at an AC meeting in Cambridge
- As a result, we are now in a situation where a large plurality of W3C members do not want to see EME published until a covenant is arrived at, but having done nothing on that front for more than a year, we have a lopsided world where the technology is asserted to be ready for launch and the policy component is still on the drawing board
Whether a refusal to discuss this issue was a deliberate calculation or a tragic misjudgment, it was a terrible mistake. Because of a leadership decision to steamroller the opposition rather than compromise with it (or even continue talks with it), the W3C has, for the very first time in its history, arrived at the moment of publication with no consensus in sight, and no path to consensus in sight either.
Publication at this point would mark not one, but THREE sea-changes in the W3C’s nature:
- The W3C is now the kind of body that makes standards to allow browser vendors to restrict how users can use the data they receive
- The W3C is now the kind of body that allows members’ IPRs to control who may interoperate with its standards
- The W3C is now the kind of body where deeply divisive issues are settled by allowing one group to simply declare the other group to be out-of-bounds, out-of-touch, out-of-scope or out-of-order and to thus publish things that large numbers of its members have deep moral, technical and legal objections to, rather than deliberating and compromising to resolve these divisions.
DRM opponents at the W3C extended a significant compromise to DRM advocates: a covenant that would allow DRM users to enforce copyright, torts and trade secrecy (and every other right they have in law), while still deploying DRM.
The members who want DRM insisted they would only proceed if DRM could also be a tool for asserting rights that no legislature ever granted them. That is what brought us to this juncture: an unwillingness on one side to make any compromise whatsoever.
That is not in the spirit of multistakeholder processes or the history of the W3C. Any future progress on EME at the W3C will require compromise on both sides, not blithe assertions that no “difference” is to be found in going down one path or the other.
Cory