posted about 2 hours ago on techdirt
In April, Donald Trump insisted he had no interest whatsoever in getting back on Twitter (in response to questions about whether or not Elon Musk would allow him back, should he ever close his Twitter purchase). In May, Donald Trump lost his lawsuit trying to force Twitter to reinstate him. In June, Donald Trump (who, again, insists he wouldn’t even go back to Twitter if he were allowed to) decided to appeal the loss in that lawsuit in order to try to force Twitter to reinstate him. The fact that Donald Trump might state things contrary to the truth isn’t much of a surprise, of course. But at some point, you gotta wonder how much he actually wants to rack up legal bills for this nonsense victimization campaign. To be honest, I was a bit surprised Trump jumped straight to appeal here. The district court judge had left it open for him to amend his complaint, and I figured Trump would take one more crack at that before jumping to appeal. However, maybe he’s riding high because his hand-picked Supreme Court Justices have started to show less and less restraint in using their lifetime appointments to settle political grievances — so perhaps he feels the faster he can get in front of today’s SCOTUS, the better. This case is a total loser, though, and it would take some seriously warped twisting of so much existing law that even this court would likely find it difficult to force Twitter to reinstate Trump.

Read More...
posted about 11 hours ago on techdirt
Michael E. Karpeles, Program Lead on OpenLibrary.org at the Internet Archive, spotted an interesting blog post by Michael Kozlowski, the editor-in-chief of Good e-Reader. It concerns Amazon and its audiobook division, Audible: Amazon-owned Audible ceased selling individual audiobooks through their Android app from Google Play a couple of weeks ago. This will prevent anyone from buying audio titles individually. However, Audible still sells subscriptions through the app (…) Karpeles points out that this is yet another straw in the wind indicating that the ownership of digital goods is being replaced with a rental model. He wrote a post last year exploring the broader implications, using Netflix as an example: What content landlords like Netflix are trying to do now is eliminate our “purchase” option entirely. Without it, renting becomes the only option and they are thus free to arbitrarily hike up rental fees, which we have to pay over and over again without us getting any of these aforementioned rights and freedoms. It’s a classic example of getting less for more. He goes on to underline four extremely serious consequences of this shift. One is the end of “forever access”. If the company adopting the rental model goes out of business, customers lose access to everything they were paying for. With the ownership of goods, even if the supplier goes bankrupt, you still have the product they sold to you. Secondly, the rental model effectively means the end of the public domain for material offered in that way. In theory, books, music, films and the rest that are under copyright should enter the public domain after a certain time – typically around a century after they first appeared. But when these digital goods are offered using the rental model, they usually come wrapped up in digital locks – digital rights management (DRM) – to prevent people exiting the rental model by making a personal copy. That means that even if the company offering the digital goods is still around when the copyright expires, this content will remain locked away even when it enters the public domain, because it is illegal under copyright laws like the US DMCA and EU Information Society Directive to circumvent those locks. Thirdly, Karpeles notes, the rental model means the end of personal digital freedom in this sphere. Since you access everything through the service provider, the latter knows what you are doing with the rented material and when. How much it chooses to spy on you will depend on the company, but you probably won’t know unless you live somewhere like the EU, where you can make a request to the company for the personal data it holds about you. Finally, and perhaps least obviously, it means the end of the library model that has served us so well for hundreds of years. Increasingly, libraries are unable to buy copies of ebooks outright, but must rent them. This means they must follow the strict licensing conditions imposed by publishers on how those ebooks are lent out by the library. For example, some publishers license ebooks for a set period of time – typically a year or two – with no guarantee that renewal will be possible at the end of that time. Others have adopted a metered approach that counts how many times an ebook is lent out, and blocks access after a preset number.
Karpeles writes: Looking to the future, as more books become only available for lease as eBooks, I see no clear option which allows libraries to sustainably serve their important roles as reliable, long-term public access repositories of cultural heritage and human knowledge. It used to be the case that a library would purchase a book once and it would serve the public for decades. Instead, now at the end of each year, a library’s eBooks simply vanish unless libraries are able to find enough quarters to re-feed the meter. The option to own new digital goods or to access the digital holdings of public libraries may not be available much longer – enjoy them while you can. Follow me @glynmoody on Twitter, Diaspora, or Mastodon. Originally posted to Walled Culture.

Read More...
posted about 15 hours ago on techdirt
The terrible, awful, no good, horrible plans to regulate the internet keep coming faster and furiouser these days. So, it’s worth remembering a time back when Congress passed one of the worst laws about the internet: the Communications Decency Act. Yes, these days we talk about the CDA more reverently, but that’s only because we’re talking about the one part of it that wasn’t declared unconstitutional: Section 230. Section 230, of course, was never even supposed to be a part of the CDA in the first place. It was crafted by then Representatives Chris Cox and Ron Wyden as an alternative approach to the ridiculousness that was coming out of Senator James Exon in the Senate. But, you know, this is Congress, and rather than just do the right thing, it mashed the two approaches together in one bill and figured God or the courts would sort it out. And, thankfully, the courts did sort it out. Twenty-five years ago this week, the court decided Reno v. ACLU, dumped the entire CDA (minus Section 230) as blatantly unconstitutional, and, in effect, saved the internet. Jared Schroeder and Jeff Kosseff wrote up a nice article about the 25th anniversary of the Reno decision that is well worth reading. When faced with the first significant case about online expression, justices went in a completely different direction than Congress, using the Reno case to confer the highest level of protections on online expression. The case started when a broad coalition of civil liberties groups, business interests, and others, including the American Civil Liberties Union, American Library Association, Planned Parenthood Federation of America, and Microsoft, sued. A three-judge panel in Philadelphia struck down much of the law, and the case quickly moved to the Supreme Court. The federal government tried to justify these restrictions partly by pointing to a 1978 opinion in which the court allowed the FCC to sanction a radio station that broadcast George Carlin’s “seven dirty words.” Justices dismissed these arguments. They saw something different in the internet and rejected attempts to apply weaker First Amendment protections to the internet. Justices reasoned the new medium was fundamentally different from the scarce broadcast spectrum. “This dynamic, multifaceted category of communication includes not only traditional print and news services, but also audio, video, and still images, as well as interactive, real-time dialogue,” Justice John Paul Stevens wrote. “Through the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of Web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer.” The article has a lot more details about the case, and why it’s still relevant. Also, how the messages from that ruling are still useful today as we are, once again, facing many attempts to regulate the internet. The precedent’s relevance isn’t in the case’s dated facts or romanticized predictions. Its enduring value is in the idea the internet should generally be protected from government control. Without the Supreme Court’s lucid and fervent defense of online free speech, regulators, legislators, and judges could have more easily imposed their values on the internet. There’s a lot more in that article, but go read it… on this very internet that would have been a very, very different place without that ruling.

Read More...
posted about 17 hours ago on techdirt
The world can be an awful, horrible place. Lately, it feels like, in America, things are only getting more difficult. And, because my country loves its scapegoats, the internet has been routinely blamed for all the country’s, perhaps the world’s, ills. Insurrections, political radicalization, obesity, poor socialization, literally any sub-optimal thing to do with children: blame the internet. But that’s obviously stupid. The internet is responsible for both good and bad outcomes in society, as is pretty much everything else. But the internet also is only as good or bad as those that make use of it. And sometimes, the internet enables really awesome stuff. Take the story of Sofie Dill, Seattle Mariners fan, and Simranjeet Singh, a DoorDash driver. This past weekend, without getting into too much detail, Jesse Winker was hit by a pitch while playing the L.A. Angels and a brawl between the teams ensued. Baseball fights are plainly dumb, but some fans enjoy them, or at least root for their players in the fight. To that end, Dill, from her home in Arkansas, decided to send Winker a pizza from a local Anaheim parlor to be delivered directly to the stadium. And, for good measure, she live-tweeted her DoorDash experience for everyone to follow along. I just ordered a pizza for Jesse Winker from @MountainMikes Pizza in Anaheim. You deserve it big guy @Mariners pic.twitter.com/AymUQvQ3r9 — Sofie (@sofieballgame) June 26, 2022 Baseball fan or not, you should go check out the full thread. It’s a harrowing journey to see if she could in fact deliver a pizza to a professional baseball player in a visiting Major League clubhouse to express her support. The spoiler here is that the pizza did in fact get delivered, Winker reached out to her on Twitter to say thanks, and a whole bunch of people were cheering on the DoorDash driver, Singh, as he went on his dutiful journey. As a result, Dill got Singh to share his Venmo QR code, which she then shared out to Twitter. KING SIMRANJEET’S VENMO! simranjeet-Singh-13 Please show your appreciation my friends! pic.twitter.com/DoybuSy2Uy — Sofie (@sofieballgame) June 26, 2022 And from there, the internet did its thing. Plenty of folks started sending money to Singh’s Venmo. Others asked if they could send him money via other platforms. Singh himself started sending out tweets thanking everyone, clearly overjoyed at everyone’s generosity. Then, were that not enough, two other awesome outcomes happened, just to restore your faith in humanity. While I can’t be sure how much was donated to Singh, he certainly didn’t keep all of it for himself. "Alone we can do so little;Together we can do so much." A contribution from all the @Mariners fans and all the other people who […] @Mariners @StJude @DoorDash @Gurmeetkaur1414 @stephenjnesbitt @sn_mlb#BlessedAndGrateful #viraltwitter #USA pic.twitter.com/cltiJ9bnj4 — Simranjeet Singh (@JeetBhamra4) June 27, 2022 There are good people in this world. Paying it forward would have been the feel-good coda to this story on its own, but then the Mariners decided to get in on the fun as well. TONIGHT ONLY at T-Mobile Park! Get a FREE @Mariners pizza pin with purchase of a Jesse Winker player t-shirt or jersey! *While supplies last. Available at select locations only. Cannot be combined with any other offer. pic.twitter.com/dVx3Z0Yvgj — Mariners Team Store (@MarinersStore) June 27, 2022 Dill got herself a Winker jersey from the Mariners. Singh had what he describes as a life-changing event.
Mariners fans got to have a ton of fun on Twitter with all of this. St. Jude’s got a donation. If there’s a loser in this story, I can’t find one. And all of this was made possible by the evil, vile internet that too many people blame for every last thing.

Read More...
posted about 18 hours ago on techdirt
I’m continuing my coverage of dangerous Internet bills in the California legislature. This job is especially challenging during an election year, when legislators rally behind the “protect the kids” mantra to pursue bills that are likely to hurt, or at least not help, kids. Today’s example is AB 2273, the Age-Appropriate Design Code Act (AADC). Before we get overwhelmed by the bill’s details, I’ll highlight three crucial concerns: First, the bill pretextually claims to protect children, but it will change the Internet for EVERYONE. In order to determine who is a child, websites and apps will have to authenticate the age of ALL consumers before they can use the service. NO ONE WANTS THIS. It will erect barriers to roaming around the Internet. Bye bye, casual browsing. To do the authentication, businesses will be forced to collect personal information they don’t want to collect and consumers don’t want to give, and that data collection creates extra privacy and security risks for everyone. Furthermore, age authentication usually also requires identity authentication, and that will end anonymous/unattributed online activity. Second, even if businesses extended the heightened obligations required for children to all consumers (i.e., adults), businesses still could not comply with this bill. That’s because this bill is based on the U.K. Age-Appropriate Design Code. European laws are often aspirational and standards-based (instead of rule-based), because European regulators and regulated businesses engage in dialogues, and the regulators reward good tries, even if they aren’t successful. We don’t do “A-for-Effort” laws in the U.S., and generally we rely on rules, not standards, to provide certainty to businesses and reduce regulatory overreach and censorship. Third, this bill reaches topics well beyond children’s privacy. Instead, the bill repeatedly implicates general consumer protection concerns and, most troublingly, content moderation topics. This turns the bill into a trojan horse for comprehensive regulation of Internet services and would turn the privacy-centric California Privacy Protection Agency (CPPA) into a general-purpose Internet regulator. So the big takeaway: this bill’s protect-the-children framing is designed to mislead everyone about the bill’s scope. The bill will dramatically degrade the Internet experience for everyone and will empower a new censorship-focused regulator who has no interest or expertise in balancing complex and competing interests. What the Bill Says Who’s Covered The bill applies to a “business that provides an online service, product, or feature likely to be accessed by a child.” “Child” is defined as under-18, so the bill treats teens and toddlers identically. The phrase “likely to be accessed by a child” means “it is reasonable to expect, based on the nature of the content, the associated marketing, the online context, or academic or internal research, that the service, product, or feature would be accessed by children.” Compare how COPPA handles this issue; it applies when services know (not merely anticipate) users are under-13 or direct their services to an under-13 audience. In contrast, the bill says that if it’s reasonable to expect ONE under-18 user, the business must comply with its requirements. With that overexpansive framing, few websites and apps can reasonably expect that under-18s will NEVER use their services.
Thus, I believe all websites/apps are covered by this law so long as they clear the CPRA’s quantitative thresholds for being a “business.” [Note: it’s not clear how this bill fits into the CPRA, but I think the CPRA’s “business” definition applies.] What’s Required The bill starts with this aspirational statement: “Companies that develop and provide online services, products, or features that children are likely to access should consider the best interests of children when designing, developing, and providing that service, product, or feature.” The “should consider” grammar is the kind of regulatory aspiration found in European law. Does this statement have legal consequences or not? I vote it does not, because “should” is not a compulsory obligation. So what is it doing here? More generally, this provision tries to anchor the bill in the notion that businesses owe a “duty of loyalty” or fiduciary duty to their consumers. This duty-based approach to privacy regulation is trendy in privacy circles, but if adopted, it would exponentially expand regulatory oversight of businesses’ decisions. Regulators (and private plaintiffs) can always second-guess a business’ decision; a duty of “loyalty” gives the regulators the unlimited power to insist that the business made wrong calls and impose punishments accordingly. We usually see fiduciary/loyalty obligations in the professional services context, where the professional service provider must put an individual customer’s needs before its own profit. Expanding this concept to mass-market businesses with millions of consumers would take us into uncharted regulatory territory. The bill would obligate regulated businesses to: Do data protection impact assessments (DPIAs) for any features likely to be accessed by kids (i.e., all features), provide a “report of the assessment” to the CPPA, and update the DPIA at least every 2 years. “Establish the age of consumers with a reasonable level of certainty appropriate to the risks that arise from the data management practices of the business, or apply the privacy and data protections afforded to children to all consumers.” As discussed below, this is a poison pill for the Internet. This also exposes part of the true agenda here: if a business can’t do what the bill requires (a common consequence), the bill drives businesses to adopt the most restrictive regulation for everyone, including adults. Configure default settings to a “high level of privacy protection,” whatever that means. I think this meant to say that kids should automatically get the highest privacy settings offered by the business, whatever that level is, but it’s not what it says. Instead, this becomes an aspirational statement about what constitutes a “high level” of protection. All disclosures must be made “concisely, prominently, and using clear language suited to the age of children likely to access” the service. The disclosures in play are “privacy information, terms of service, policies, and community standards.” Note how this reaches all consumer disclosures, not just those that are privacy-focused. This is the first of several times we’ll see the bill’s power grab beyond privacy. Also, if a single toddler is “likely” to access the service, must all disclosures be written at toddlers’ reading level? Provide an “obvious signal” if parents can monitor their kids’ activities online. How does this intersect with COPPA?
“Enforce published terms, policies, and community standards established by the business, including, but not limited to, privacy policies and those concerning children.” This language unambiguously governs all consumer disclosures, not just privacy-focused ones. Interpreted literally, it’s ludicrous to mandate that businesses enforce every provision in their TOSes. If a consumer breaches a TOS by scraping content or posting violative content, does this provision require businesses to sue the consumer for breach of contract? More generally, this provision directly overlaps AB 587, which requires businesses to disclose their editorial policies and gives regulators the power to investigate and enforce any perceived or alleged deviations in how services moderate content. See my excoriation of AB 587. This provision is a trojan horse for government censorship that has nothing to do with protecting the kids or even privacy. Plus, even if it weren’t an unconstitutional provision, the CPPA, with its privacy focus, lacks the expertise to monitor/enforce content moderation decisions. “Provide prominent, accessible, and responsive tools to help children, or where applicable their parent or guardian, exercise their privacy rights and report concerns.” Not sure what this means, especially in light of the CPRA’s detailed provisions about how consumers can exercise privacy rights. The bill would also obligate regulated businesses not to: “Use the personal information of any child in a way that the business knows or has reason to know the online service, product, or feature more likely than not causes or contributes to a more than de minimis risk of harm to the physical health, mental health, or well-being of a child.” This provision cannot be complied with. It appears that businesses must change their services if a single child might suffer any of these harms, which is always? This provision especially seems to target UGC features, where people always say mean things that upset other users. Knowing that, what exactly are UGC services supposed to do differently? I assume the paradigmatic example is the concern about kids’ social media addiction, but like the AB 587 discussion above, the legislature is separately considering an entire bill on that topic (AB 2408), and this one-sentence treatment of such a complicated and censorial objective isn’t helpful. “Profile a child by default.” “Profile” is not defined in the bill. The term “profile” is used 3x in the CPRA but also not defined. So what does this mean? “Collect, sell, share, or retain any personal information that is not necessary to provide a service, product, or feature with which a child is actively and knowingly engaged.” This partially overlaps COPPA. “If a business does not have actual knowledge of the age of a consumer, it shall not collect, share, sell, or retain any personal information that is not necessary to provide a service, product, or feature with which a consumer is actively and knowingly engaged.” Note how the bill switches to the phrase “actual knowledge” about age rather than the threshold “likely to be accessed by kids.” This provision will affect many adults. “Use the personal information of a child for any reason other than the reason or reasons for which that personal information was collected.
If the business does not have actual knowledge of the age of the consumer, the business shall not use any personal information for any reason other than the reason or reasons for which that personal information was collected.” Same point about actual knowledge. Sell/share a child’s PI unless needed for the service. “Collect, sell, or share any precise geolocation information of children by default” unless needed for the service–and only if providing “an obvious sign to the child for the duration of that collection.” “Use dark patterns or other techniques to lead or encourage consumers to provide personal information beyond what is reasonably expected for the service the child is accessing and necessary to provide that service or product to forego privacy protections, or to otherwise take any action that the business knows or has reason to know the online service or product more likely than not causes or contributes to a more than de minimis risk of harm to the child’s physical health, mental health, or well-being.” No one knows what the term “dark patterns” means, and now the bill would also restrict “other techniques” that aren’t dark patterns? Also see my earlier point about the “de minimis risk of harm” requirement. “Use any personal information collected or processed to establish age or age range for any other purpose, or retain that personal information longer than necessary to establish age. Age assurance shall be proportionate to the risks and data practice of a service, product, or feature.” The bill expressly acknowledges that businesses can’t authenticate age without collecting PI–including PI the business would choose not to collect but for this bill. This is like the CCPA/CPRA’s problems with “verifiable consumer request”–to verify the consumer, the business has to ask for PI, sometimes more invasively than the PI the consumer is making the request about. ¯\_(ツ)_/¯ New Taskforce The bill would create a new government entity, the “California Children’s Data Protection Taskforce,” composed of “Californians with expertise in the areas of privacy, physical health, mental health, and well-being, technology, and children’s rights” as appointed by the CPPA. The taskforce’s job is “to evaluate best practices for the implementation of this title, and to provide support to businesses, with an emphasis on small and medium businesses, to comply with this title.” The scope of this taskforce likely exceeds privacy topics. For example, the taskforce is charged with developing best practices for “Assessing and mitigating risks to children that arise from the use of an online service, product, or feature”–this scope isn’t limited to privacy risks. Indeed, it likely reaches services’ editorial decisions. The CPPA is charged with constituting and supervising this taskforce even though it lacks expertise on non-privacy-related topics. New Regulations The bill obligates the CPPA to come up with regulations supporting this bill by April 1, 2024. Given the CADOJ’s and CPPA’s track record of missing statutorily required timelines for rule-making, how likely is this schedule? Problems With the Bill Unwanted Consequences of Age and Identity Authentication. Structurally, the law tries to sort the online population into kids and adults for different regulatory treatment. The desire to distinguish between children and adults online has a venerable regulatory history. The first Congressional law to crack down on the Internet, the Communications Decency Act, had the same requirement.
It was struck down as unconstitutional because of that requirement’s infeasibility. Yet, after 25 years, age authentication still remains a vexing technical and social challenge. Counterproductively, age-authentication processes are generally privacy invasive. There are two primary ways to do it: (1) demand the consumer disclose lots of personal information, or (2) use facial recognition and collect highly sensitive face information (and more). Businesses don’t want to invade their consumers’ privacy in these ways, and COPPA doesn’t require such invasiveness either. Also, it’s typically impossible to do age-authentication without also doing identity-authentication so that the consumer can establish a persistent identity with the service. Otherwise, every consumer (kids and adults) will have to authenticate their age each time they access a service, which will create friction and discourage usage. But if businesses authenticate identity, and not just age, then the bill creates even greater privacy and security risks, as consumers will have to disclose even more PI. Furthermore, identity authentication functionally eliminates anonymous online activity and all unattributed activity and content on the Internet. This would hurt many communities, such as minorities concerned about revealing their identity (e.g., LGBTQ), pregnant women seeking information about abortions, and whistleblowers. This also raises obvious First Amendment concerns. Enforcement. The bill doesn’t specify the enforcement mechanisms. Instead, it wades into an obvious and avoidable tension in California law. On the one hand, the CPRA expressly negates private rights of action (except for certain data security breaches). If this bill is part of the CPRA–which the introductory language implies–then it should be subject to the CPRA’s enforcement limits. CADOJ and CPPA have exclusive enforcement authority over the CPRA, and there’s no private right of action (PRA). On the other hand, California B&P 17200 allows for PRAs for any legal violation, including violations of other California statutes. So unless the bill is cabined by the CPRA’s enforcement limit, the bill will be subject to PRAs through 17200. So which is it? ¯\_(ツ)_/¯ Adding to the CPPA’s Workload. The CPPA is already overwhelmed. It can’t make its rule-making deadline of July 1, 2022 (missing it by months). That means businesses will have to comply with the voluminous rules with inadequate compliance time. Once that initial rule-making is done, the CPPA will then have to build a brand-new administrative enforcement function and start bringing, prosecuting, and adjudicating enforcements. That will be another demanding, complex, and time-consuming project for the CPPA. So it’s preposterous that the California legislature would add MORE to the CPPA’s agenda, when it clearly cannot handle the work that the California voters have already instructed it to do. Trade Secret Problems. Requiring businesses to report about their DPIAs for every feature they launch potentially discloses lots of trade secrets–which may blow their trade secret protection. It certainly provides a rich roadmap for plaintiffs to mine. Conflict with COPPA. The bill does not provide any exceptions for parental consent to the business’ privacy practices. Instead, the bill takes power away from parents. Does this conflict with COPPA such that COPPA would preempt it? No doubt the bill’s basic scheme rejects COPPA’s parental control model. I’ll also note that any PRA may compound the preemption problem.
“Allowing private plaintiffs to bring suits for violations of conduct regulated by COPPA, even styled in the form of state law claims, with no obligation to cooperate with the FTC, is inconsistent with the treatment of COPPA violations as outlined in the COPPA statute.” Hubbard v. Google LLC, 546 F. Supp. 3d 986 (N.D. Cal. 2021). Conflict with CPRA’s Amendment Process. The legislature may amend the CPRA by majority vote only if it enhances consumer privacy rights. As I’ve explained before, this is a trap, because I believe the amendments must uniformly enhance consumer privacy rights. In other words, if some consumers get greater privacy rights, but other consumers get less privacy rights, then the legislature cannot make the amendment via majority vote. In this case, the AADC undermines consumer privacy by exposing both children and adults to new privacy and security risks through the authentication process. Thus, the bill, if passed, could be struck down as exceeding the legislature’s authority. In addition, the bill says “If a conflict arises between commercial interests and the best interests of children, companies should prioritize the privacy, safety, and well-being of children over commercial interests.” A reminder of what the CPRA actually says: “The rights of consumers and the responsibilities of businesses should be implemented with the goal of strengthening consumer privacy, while giving attention to the impact on business and innovation.” By disregarding the CPRA’s instruction to consider impacts on businesses, this also exceeds the legislature’s authority. Dormant Commerce Clause. The bill creates numerous potential DCC problems. Most importantly, businesses necessarily will have to authenticate the age of all consumers, both in and outside of California. This means that the bill would govern how businesses based outside of California interact with non-Californians, which the DCC does not permit. Conclusion Due to its scope and likely impact, this bill is one of the most consequential bills in the California legislature this year. The Internet as we know it hangs in the balance. If your legislator isn’t paying proper attention to those consequences (spoiler: they aren’t), you should give them a call. Originally posted to Eric Goldman’s Technology & Marketing Law blog. Reposted with permission.

Read More...
posted about 18 hours ago on techdirt
The Ultimate Excel VBA Bundle and Microsoft Office Professional Plus 2021 for Windows will take you from Excel beginner to expert in no time. Over 13 courses, you’ll learn how to use Excel VBA to do a variety of tasks: how to create your first macro from scratch, how to control mouse and keyboard commands, how to extract and manipulate data, and more. You’ll also get a license for Microsoft Office Professional Plus 2021 for Windows. The bundle is on sale for $59. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Read More...
posted about 19 hours ago on techdirt
Israeli phone malware manufacturer NSO Group has plenty of customers. Or at least it did until the Israeli government edited the company’s list of approved customers and the US government slapped sanctions on it. NSO has sold its malware to plenty of abusive governments with long histories of human rights violations. It has also sold its products to countries far less notorious for human rights abuse, but who still misused the company’s powerful Pegasus malware to target dissidents, political opponents, and government critics. Facing pressure and criticism from pretty much every country that doesn’t openly engage in human rights abuses, NSO Group is trying to survive several months of bad press, sanctions, and dwindling funding. When not courting potential purchasers who may not care about the company’s sordid past, NSO Group reps are answering questions posed to them by lawmakers who appear poised to engage in more direct regulation of malicious code. According to this report by Antoaneta Roussi for Politico, the spyware developer has publicly admitted it has a handful of European customers. The Israeli spyware firm NSO Group on Tuesday told European lawmakers at least five EU countries have used its software and the firm has terminated at least one contract with an EU member country following abuse of its Pegasus surveillance software. Speaking to the European Parliament’s committee looking into the use of spyware in Europe, NSO Group’s General Counsel Chaim Gelfand said the company had “made mistakes,” but that it had also passed up a huge amount of revenue, canceling contracts since misuse had come to light. “At least five” leaves a whole lot open to interpretation. And counting its own customers accurately seems like something a tech company that has developed some of the most fiendishly clever malware ever created should be able to do easily; providing an accurate total should be well within its technological grasp. But, much like the FBI, with its billions in funding, can’t seem to count the number of encrypted devices in its evidence lockers, NSO Group appears to be unable to count its total number of European customers, despite being informed ahead of time that it would need to give this testimony. That’s all NSO could provide, apparently. And it’s not much. We already know Poland is an NSO customer. (And it’s still part of Europe, no matter what the Russian government would prefer at the moment.) And it seems pretty clear the Spanish government has deployed the malware. Phones owned by Catalan members of the EU Parliament were hit with Pegasus malware, and the Spanish government has made no secret of its desire to crush the Catalan independence movement. That’s two out of the “at least five.” Every other country in the European Union has “national security interests” and a desire to fight crime — two justifications used by NSO to move its product — so it stands to reason the number of European customers is much greater than the “at least five” NSO claims to have. More ridiculous than this open-ended (but still seemingly small!) number NSO handed to EU lawmakers is the follow-up statement by its general counsel. At least five EU countries had used NSO’s tool, Gelfand said, adding he would come back to MEPs with a “more concrete number.” “Come back?” Are you kidding? How does NSO’s lawyer not have the actual number readily available?
How was it not possible to have the actual number sent to him during this inquiry, moments after asking for it from NSO’s executives or account managers? The only explanation for this lack of accurate information is that someone doesn’t want it revealed. NSO may not want to let the rest of the world know how many customers it has in Europe, especially given the propensity of its customers to abuse its products. And plenty of EU members may not want the public to know they’ve been buying powerful tech tools from a shady digital arms dealer. Claiming you’ll come back with an answer when you already have instant access to one is pure bullshit. Granted, it’s the kind of bullshit you pay your general counsel handsomely to deliver when facing government inquiries, but it’s not the sort of thing that endears you to regulators or the public they serve. This inability to count past five is going to do more reputational damage to a company that literally cannot afford it.

Read More...
posted about 21 hours ago on techdirt
As we’ve discussed before, Supreme Court Justice Clarence Thomas really does not like the “actual malice” standard required to make a defamation claim against a public figure, as laid out in the extremely important NY Times v. Sullivan case. The actual malice standard confuses many people, because it’s not actually about malice. The standard is that for there to be defamation of a public figure, the statement needs to be expressed while the speaker knows that the claims are false, or with “reckless disregard” for whether they’re true. And even the “reckless disregard” part is often misunderstood. It’s a much higher bar than simply being negligent. It means that the speaker, at the time of expression, had serious doubts about whether the speech was true. This standard has been a huge benefit for freedom of speech. Especially in an era when the rich and powerful use abusive SLAPP suits to drag critics into court with no hope of actually winning. Being able to highlight the lack of any evidence for actual malice has been tremendously helpful in getting many cases kicked out of court at the first opportunity. However, this is exactly why some rich and powerful people are very much against that standard. And that’s even though for years this was considered settled law, with almost no one challenging the standard at all. And then, in 2019, Clarence Thomas tossed out a bizarre hand grenade, announcing that he thought it was time to revisit the actual malice standard. That has kicked off a series of strategic lawsuits with the goal of getting the Supreme Court to do exactly that. Things got slightly scarier last year when Thomas once again made the same argument, and this time got Neil Gorsuch to make a similar argument. Last year also saw Clarence Thomas’ own mentor, DC Circuit Judge Laurence Silberman, pen an even more unhinged attack on the actual malice standard, which he claims only enables the mainstream media to be mean to his conservative buddies. He basically argues that if only we got rid of it, the media could be more like those awesome folks at Fox News, being nice to conservatives. So, there had been some concern this week that the Supreme Court might grant the cert petition in Coral Ridge v. SPLC, a case that is attempting to take Thomas up on his offer to ditch the standard. Thankfully, the court said no. But it gave Thomas yet another chance to dissent and rant more about the actual malice standard… citing his mentor’s unhinged rant in support. I would grant certiorari in this case to revisit the “actual malice” standard. This case is one of many showing how New York Times and its progeny have allowed media organizations and interest groups “to cast false aspersions on public figures with near impunity.” Tah, 991 F. 3d, at 254 (opinion of Silberman, J.). SPLC’s “hate group” designation lumped Coral Ridge’s Christian ministry with groups like the Ku Klux Klan and Neo-Nazis. It placed Coral Ridge on an interactive, online “Hate Map” and caused Coral Ridge concrete financial injury by excluding it from the AmazonSmile donation program. Nonetheless, unable to satisfy the “almost impossible” actual-malice standard this Court has imposed, Coral Ridge could not hold SPLC to account for what it maintains is a blatant falsehood. About the only good thing you can say here is that neither Gorsuch nor any of the other Justices signed on to Thomas’ dissent or issued their own attacks on actual malice. So, thankfully, for at least some time, this core 1st Amendment standard remains standing.
Of course, while we wait for Thomas to convince others, Congress could take action. Check that: Congress should take action. Congress can and should codify the actual malice standard in law. Hell, why not go crazy and not just codify the actual malice standard into law, but pair it with a strong, functioning federal anti-SLAPP law that would allow defendants dragged into court as an intimidation and speech suppression tactic to get cases kicked out of court quickly — and force the abusive plaintiffs to pick up the bill?

Read More...
posted about 24 hours ago on techdirt
On the one hand, content moderation at the scale at which modern social media companies operate is an impossible nightmare. Companies are always going to lack the staff and resources to do it well (raising questions about the dangers of automation at scale), and they’re always going to screw things up for reasons well discussed. At the same time, there’s Facebook. A company whose executive leadership team often compounds these challenges by making the worst and most idiotic decisions possible at any particular moment. Case in point: the company appears to have consciously embraced the policy of banning Facebook and Instagram users for saying they might mail abortion pills in the wake of the Supreme Court’s overturning of Roe: To corroborate this activity, on Friday a Motherboard reporter attempted to post the phrase “abortion pills can be mailed” on Facebook using a burner account. The post was flagged within seconds as violating the site’s community standards, specifically the rules against buying, selling, or exchanging medical or non-medical drugs. The reporter was given the option to “disagree” with the decision or “agree” with it. After they chose “disagree,” the post was removed. Again, we’re not just talking about blocking websites that actually mail abortion pills. Reporters at Vice’s Motherboard found that even publicly acknowledging that abortion pills exist and could be mailed resulted in an account ban: Other reporters have confirmed the changes. Facebook refuses to reverse the bans or even respond to reporter inquiries into the policy, which are consciously bad choices, not content-moderation-at-scale problems. The company’s systems claim that even mentioning that these pills exist violates its community standards related to “restricted goods and services.” Yet when other reporters made similar posts promising to mail marijuana or guns, there were no restrictions: The Facebook account was immediately put on a “warning” status for the post, which Facebook said violated its standards on “guns, animals and other regulated goods.” Yet, when the AP reporter made the same exact post but swapped out the words “abortion pills” for “a gun,” the post remained untouched. A post with the same exact offer to mail “weed” was also left up and not considered a violation. Marijuana is illegal under federal law and it is illegal to send it through the mail. Activist groups like Fight For the Future were decidedly unimpressed, saying the policy foretold uglier things to come as the far right continues to push its court-enabled advantage: Facebook’s censorship of critical reproductive healthcare information and advocacy should be a massive, code-red warning to Democrats who want to revise or repeal Section 230. In a post-Roe environment, litigation-fearing platforms will cover their hides by tearing down online access to abortion healthcare and support. Facebook, no stranger to sucking up to and amplifying the authoritarian right, has also tried to restrict employees from talking about abortion bans at work, triggering a backlash. The company is also finding itself under fire after it classified one prominent pro-choice activism group as a terrorist organization. Countless tech companies, including Facebook, have failed to even issue basic platitudes on securing women’s location, app usage, or browsing data from state officials (or vigilantes) looking to punish women in the wake of Roe’s reversal.
Again, this initial lack of any meaningful backbone whatsoever in the face of one of the most wide-reaching, transformative, legally dubious, and dangerous political projects in a generation doesn’t exactly instill confidence that Facebook will make sound decisions as U.S. authoritarianism accelerates and a radical court steadily chips away at democratic norms and long-established law.
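The near-instant flagging Motherboard describes is the signature of automated phrase matching rather than human review. We obviously don’t know what Facebook’s actual systems look like, but a hypothetical sketch of the kind of phrase-list rule that would produce both the instant removal and the gun/weed asymmetry (the pattern list and function names here are invented for illustration) might look something like this:

import re

# Hypothetical rule list for a "restricted goods and services" policy.
# Whatever someone put on the list gets flagged within seconds; whatever
# was never added sails through, which would explain why "abortion pills"
# trips the filter while "a gun" and "weed" apparently did not.
RESTRICTED_GOODS_PATTERNS = [
    r"\babortion pills?\b.*\b(mail|ship|send)",
]

def should_flag(post_text: str) -> bool:
    """Return True if the post matches any restricted-goods pattern."""
    lowered = post_text.lower()
    return any(re.search(pattern, lowered) for pattern in RESTRICTED_GOODS_PATTERNS)

print(should_flag("abortion pills can be mailed"))  # True: matches the rule
print(should_flag("i will mail you a gun"))         # False: no rule covers it

The point of the sketch is that a rule list encodes policy choices. The asymmetry the AP reporter found isn’t an inevitable moderation-at-scale failure; it reflects which phrases someone chose to put on the list.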

Read More...
posted 1 day ago on techdirt
Donald Trump promised to take the social media world by storm with his Truth Social Twitter clone for the MAGA world. “Free speech!” he claimed as he banned anyone who criticized him. Of course, from the beginning, many suspected that this was all a very sketchy grift, using a SPAC to try to cash in on gullible MAGA folks willing to pump up a shell company stock well beyond what it could possibly be worth. Except, everything continues to fall apart. Even with Trump himself finally starting to “Truth” it up during the January 6th hearings (which have been quite damning), the site is struggling to remain relevant. It’s even gotten to the point that the ban and block finger is so heavy that he’s blocking some of his biggest fans, and they’re not happy about it. It’s almost as if it was never about “free speech” after all. Oh, and also, Reuters has a giant report on how the company is having trouble attracting tech talent, and on how other tech companies are steering way clear of partnering with it, because it’s seen as such a toxic asset all around. Truth Social last summer started recruiting tech talent. Executives sought to find ideologically aligned staffers, in at least one case scanning candidates’ social media and listening to their appearances on podcasts, according to a person familiar with company operations. But the company struggled to woo skilled tech workers, regardless of their politics, according to three people with knowledge of the recruiting efforts. Those with the company’s preferred conservative politics, or at least a commitment to its stated free-speech mission, were in short supply, they said. And tech workers with liberal or moderate politics usually wanted nothing to do with the Trump company. One person approached by TMTG told Reuters it was an easy offer to refuse. Beyond a distaste for Trump’s politics, this person cited concerns about the former president’s history of business failures – the DWAC filing lists six Trump entities that have filed for bankruptcy – and about TMTG’s financing arrangements. But, perhaps the bigger threat on the horizon is that the SPAC shell game with Digital World Acquisition Corp. may be in serious doubt. The SEC has ramped up its investigation into what’s going on here. And, as Liz Dye at Wonkette points out, things seemed to get even worse, as DWAC is now facing a grand jury investigation as well: The SEC investigation continues apace. But in the meantime, DWAC disclosed this morning that the company and its board members had all gotten subpoenas from a federal grand jury in the Justice Department’s office in the Southern District of New York. And that shark tank does not hand out investment capital. The grand jury is seeking substantially the same information as the SEC, and it has specific questions about Miami investment firm Rocket One Capital. CNBC reports that DWAC board member Bruce Garelick resigned last Wednesday. Garelick is — or perhaps was — the chief strategy officer for Rocket One. We’d check his status, but as of this morning, the company’s website looks like this: Probably just a coincidence, right? Oof. Anyway, at the very least, all of this is going to delay the SPAC deal, and it may kill it permanently. The gang that couldn’t shoot straight can’t even pull off the cashing-out part of the grift for its social media site that barely works, is having trouble attracting a meaningful userbase, and is kicking people off for any kind of wrongspeak.

Read More...
posted 1 day ago on techdirt
Late last year we discussed a plainly stupid trademark lawsuit brought by Dairy Queen, which makes tasty frozen snacks, against W.B. Mason, which is a strange combination of furniture and grocery store. At issue was the latter’s attempt to trademark some bottled water it sells under the brand “Blizzard Water”. Notably, W.B. Mason had sold water under that brand since 2010 without issue, and it was only when Dairy Queen caught wind of the trademark application that it decided to sue over the potential for confusion with its Blizzard ice cream products. If you didn’t read that original post, you’re probably now wondering why this is a thing at all, given that water and ice cream are very much not the same products and that the two companies operate in different marketplaces. Well, according to Dairy Queen’s suit, it sells Blizzards as noted, and it also sells Dasani bottled water, therefore there would be customer confusion. Fortunately, in a massive decision, the court saw how silly that argument was and found in favor of W.B. Mason. In a 217-page decision made public on Friday, U.S. District Judge Susan Richard Nelson found a lack of evidence that consumers were confused by the Blizzards or that W.B. Mason, an office products distributor, intended to confuse anyone. While acknowledging that W.B. Mason, which has two trademarks for Blizzard copy paper, was not a competitor, Dairy Queen said consumers might be confused because its U.S. restaurants sell bottled water. But the judge said the products had “very different audience appeal,” and co-existed for 11 years despite evidence that Dairy Queen’s Blizzard had achieved “iconic” status, with U.S. sales reaching $1.1 billion in 2020. Notably, as part of the facts the court uncovered and laid out in its decision, W.B. Mason doesn’t even sell its water directly to consumers. Instead, it sells water to be used in office break rooms, as the majority of its business is in office furniture. On top of that, the court points out that Dairy Queen offered no evidence of any actual customer confusion occurring over nearly a decade. “Dairy Queen introduced no evidence of an actual association between the two products,” Nelson wrote. “If association were to occur, in all likelihood, it would have occurred by now.” Dairy Queen has made some noises about appealing the ruling, but I doubt that will happen. This whole thing has been a trademark suit nothingburger from the start.

Read More...
posted 1 day ago on techdirt
Ever since it came into effect, we’ve been calling out how the EU’s General Data Protection Regulation (GDPR) was an obviously problematic bit of legislation. In the four years since it went into effect, we’ve seen nothing to change that opinion. For users, it’s been a total nuisance. Rather than take the big US internet companies down a notch, it’s only harmed smaller (often EU-based) internet companies. Multiple studies have shown that it hasn’t lived up to any of its promises, and has actually harmed innovation. And don’t get me started on how the GDPR has done massive harm to free speech and journalism. But, for the past four years, within EU policy circles, it has been entirely taboo to even suggest that maybe the EU made a mistake four years ago with the GDPR. Any time we’ve suggested it, we’ve received howls of indignation from “data protection” folks in the EU, who insist that we’re wrong about the GDPR. However, sooner or later, someone had to realize that the emperor had no clothes. And in a surprising move, the first EU official apparently willing to do so is Wojciech Wiewiórowski, the EU’s Data Protection Supervisor. So far, officials at the EU level have put up a dogged defense of what has become one of their best-known rulebooks, including by publicly pushing back against calls to punish Ireland for what activists say is a failure to bring Big Tech’s data-hungry practices to heel. Now, one of the European Union’s key voices on data protection regulation is breaking the Brussels taboo of questioning the bloc’s flagship law’s performance so far. “I think there are parts of the GDPR that definitely have to be adjusted to the future reality,” European Data Protection Supervisor Wojciech Wiewiórowski told POLITICO in an interview earlier this month. Wiewiórowski, who leads the EU’s in-house privacy regulator, is gathering data protection decision-makers in Brussels Thursday-Friday to open the debate about the GDPR’s failings and lay the groundwork for an inevitable reevaluation of the law when the new EU Commission takes office in 2024. Of course, what’s funny is that when that event actually happened, the complaints were not about how maybe the entire approach of the GDPR was wrong, but that the real problem is that the Irish Data Protection Commission wasn’t willing to fine Google and Facebook enough. European Data Protection Supervisor Wojciech Wiewiórowski on Friday said there isn’t enough privacy enforcement against tech companies like Meta and Google, hinting at a bigger role for a “pan-European” regulator. In a speech marking the end of a two-day conference designed to scrutinize the EU’s flagship privacy code, the General Data Protection Regulation or GDPR, Wiewiórowski said enforcers had so far failed to rein in data protection abuses by big companies. “I also see hopes that certain promises of the GDPR will be better delivered. I myself share views of those who believe we still do not see sufficient enforcement, in particular against Big Tech,” he said. This is really a “no, it’s the children who are wrong” moment of clarity. The GDPR was sold to the European technocrats as “finally” a way to put Google and Facebook in their place. But, in practice, as multiple studies have shown, the two companies have been mostly just fine, and it’s a bunch of their competitors that have been wiped out by the onerous compliance costs.
Rather than recognizing that maybe the whole concept behind the GDPR is the problem, they’ve decided the problem must be the enforcer in Ireland (where most of the US internet companies have their EU headquarters) so the answer must be to move the enforcement to the EU itself. Basically, the EU expected the GDPR to be a regular tool for slapping fines on American internet companies, and now that this hasn’t come to pass, the problem must be with the enforcer not doing its job, rather than the structure of the law itself. That means… it’s likely only going to get worse, not better.

Read More...
posted 1 day ago on techdirt
The Internet of Things — aka the tendency to bring Internet connectivity to devices whether they need it or not — has provided no shortage of both tragedy and comedy. “Smart” locks that are easy to bypass, “smart” fridges that leak your email credentials, or even “smart” Barbies that spy on toddlers are all pretty much par for the course in an industry with lax privacy and security standards. Even your traditional hot tub isn’t immune from the stupidity. Hot tub vendor SmartTub thought it might be nice to let you control your hot tub from your phone (because walking to the tub and quickly turning a dial is clearly too much to ask). But like so many IoT vendors more interested in the marketing potential than the reality, they allegedly implemented it without basic security protections for their website administration panel, allowing hackers to access and control hot tubs all over the planet. And not just SmartTub brands, but numerous brands from numerous manufacturers, everywhere: Eaton used a program called Fiddler to intercept and modify some code that told the website they were an admin, not just a user. They were in, and could see a wealth of information about Jacuzzi owners from around the world. “Once into the admin panel, the amount of data I was allowed to [see] was staggering. I could view the details of every spa, see its owner and even remove their ownership,” he said. “Please note that no operations were attempted that would actually change any data. Therefore, it’s unknown if any changes would actually save. I assumed they would, so I navigated carefully.” Security researcher EatonWorks documented all of his findings here. Again, not everything needs to have Internet functionality; often, dumb tech is the smarter option. Especially if you’re not willing to take the time and spend the money needed to do it correctly.
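Based on Eaton’s description (intercepting traffic with Fiddler and modifying a value that told the site he was an admin), this reads like textbook broken access control: the server trusting an authorization claim the client makes about itself. Here is a minimal, hypothetical sketch of both the mistake and the standard fix; the endpoints, headers, and session store are all invented for illustration and are not SmartTub’s actual code:

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

SPAS = {"spa-123": {"owner": "alice", "temp_c": 38}}
SESSIONS = {"token-abc": {"user": "alice", "role": "user"}}

# Vulnerable pattern: authorization decided by a client-controlled value.
# An intercepting proxy like Fiddler can rewrite this header in transit.
@app.route("/v1/spas/<spa_id>")
def get_spa_vulnerable(spa_id):
    if request.headers.get("X-Role") == "admin":  # attacker just sets this
        return jsonify(SPAS.get(spa_id, {}))
    abort(403)

# Safer pattern: derive identity and role server-side from a verified
# session token, then do an object-level ownership check.
@app.route("/v2/spas/<spa_id>")
def get_spa_checked(spa_id):
    session = SESSIONS.get(request.headers.get("Authorization", ""))
    if session is None:
        abort(401)  # no valid session token
    spa = SPAS.get(spa_id)
    if spa is None:
        abort(404)
    if session["role"] != "admin" and session["user"] != spa["owner"]:
        abort(403)  # not an admin and not the owner
    return jsonify(spa)

None of this is exotic; broken access control sits at the top of the OWASP Top 10, which is why shipping an admin panel without server-side checks is so hard to excuse.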

Read More...
posted 1 day ago on techdirt
As we see more and more western countries looking to regulate the internet in order to stifle speech they dislike, we’ve noted how much these efforts seem to be almost directly modeled on how China censors the internet. You might think that would be a reason to run in the other direction, but too many policymakers seem to now view China’s Great Firewall as a success story to be followed. And, now they may get some new ideas, as China has pushed out a draft of revisions to its regulations regarding online commenting. And, while some of it is unclear, it appears to include a provision saying that services that enable comments need to have tools in place to review every comment before it can be viewed on the site. Specifically, the draft regulations include this section: Establish and complete information security systems for the review and management, real-time inspection, emergency response, and the acceptance of reports for post comments, to review the content of post comments before publication, and promptly discover and address unlawful and negative information, and report it to the internet information departments. For somewhat obvious reasons, that’s raising some concerns. As the Tech Review article linked above notes, online comments and other more real-time communications have always been a sort of loophole in the Great Firewall, as discussions on sensitive topics often break through there, even if only to be deleted later. However, this new rule seems to be setting up a system to block even that. There’s a need for a stand-alone regulation on comments because the vast number makes them difficult to censor as rigorously as other content, like articles or videos, says Eric Liu, a former censor for Weibo who’s now researching Chinese censorship at China Digital Times.  “One thing everyone in the censorship industry knows is that nobody pays attention to the replies and bullet chats. They are moderated carelessly, with minimum effort,” Liu says.  But recently, there have been several awkward cases where comments under government Weibo accounts went rogue, pointing out government lies or rejecting the official narrative. That could be what has prompted the regulator’s proposed update. Tech Review quotes people saying that it’s unlikely (for now) that Beijing will require everyone to pre-review every comment (recognizing that’s likely to be impossible), but that it will put pressure on sites to be much more proactive, and that it could force this “feature” to be used on highly controversial topics. It does seem that a straightforward reading of the law is that it requires sites to at least build out the functionality to pre-approve all comments if need be, even if it does not need to be on all the time. There are some other features in the new regulations, including expanding who can block comments, suggesting that content creators themselves will have more power to censor comments in response to their content (rather than relying on the service’s in-house censors to do so). Also, I note that part of these requirements would please Elon Musk and others who insist that every user should be “verified,” even if their identities are not disclosed publicly. As the rules require: Follow the principle of ‘real names on file, but whatever you want up front’ , to conduct verification of identification information for registered users, and must not provide post comment services to users whose identification information has not been verified. 
So, for all of the folks out there insisting that all internet users who are commenting should have identifying information on tap, in case it’s needed, just know that you’re following in the footsteps of Chinese censors. And, of course, the new regulations also seek to tie that verified identity to China’s infamous social credit scoring system, though amusingly this is framed as part of privacy protections. Establish and complete systems for the protection of users’ personal information: the handling of users’ personal information shall comply with the principles of legality, propriety, necessity, and creditworthiness; disclose rules for handling personal information: giving notice of the goals and methods of handling personal information, the types of personal information to be handled, the period for retention, and other such matters; and obtain the consent of the individuals in accordance with law, except as otherwise provided by laws and administrative regulations. The people pushing for similar ideas in Europe and the US insist that it won’t be abused, but we can look to China (and the fact that many of the proposed regulations we’re seeing today originated as part of China’s Great Firewall censorship apparatus) to see where they likely lead.

Read More...
posted 1 day ago on techdirt
I had to rewrite this post before it got published. I originally began it with some whimsy in response to the absurdity that copyright cases like these always engender. The idea that people could ever use their rights in previous expression to forbid someone else’s subsequent expression is almost too absurd to take seriously as an articulation of law. And, according to the Supreme Court, at least in the past, it wasn’t the law. Fair use is supposed to allow people to use pre-existing expression to say new things. In fact, if the new expression does say new things, then it absolutely should be found to be fair use. In other words, the Second Circuit got things very wrong in the Andy Warhol/Prince prints case, as did the Ninth Circuit in the ComicMix/Dr. Seuss case. And so the Copia Institute filed an amicus brief at the Supreme Court, which agreed to review the Second Circuit’s decision, to say so. But in light of the Supreme Court’s most recent decisions, I had to take out the whimsy. Assuming that Constitutional rights can survive this Court’s review has become an iffy proposition and not one where any lightheartedness can be tolerated. Our brief was all about pointing out how free speech is chilled when fair uses are prohibited, and how, if the Court would like not to see that constitutional right extinguished too, it needs to overturn the decision from the Second Circuit. In that decision, issued last year, the Second Circuit found that Andy Warhol’s Prince prints did not constitute a fair use of Lynn Goldsmith’s photograph of the musician Prince. But the problem with that decision isn’t just what it means for Warhol, or the Andy Warhol Foundation for the Visual Arts (AWF) that now controls the rights in his works, but what it means for everyone, because to find his work wasn’t fair use would mean that many fewer works ever could be fair uses in the future. And such a reality would be in conflict with what the Supreme Court has previously said about fair use. Sadly, even when it comes to copyright, the Supreme Court has had a few absolute clunkers of decisions, like Aereo (“smells like cable!”), Golan (snatching works back from the public domain), and Eldred (okaying the extension of copyright terms beyond all plausible usefulness). But even in those last two cases the Court still managed to reaffirm how copyright law was always supposed to comport with the First Amendment, and how fair use was a mechanism baked into copyright to ensure copyright vindicated those values. And the Court has since reiterated how expansive fair use must be to vindicate them, most notably in the Google v. Oracle case last year, which reaffirmed its earlier fair use-protecting decision in Campbell v. Acuff-Rose (involving the 2 Live Crew parody of “Pretty Woman”). Unfortunately, however, the Second Circuit’s decision was out of step with both those fair use decisions, which is why AWF petitioned for Supreme Court review, probably a big reason why review was granted, and why the Copia Institute has now weighed in to support their position with our own amicus brief. In our brief we made the point that copyright law has to be consistent with two constitutional provisions: the Progress Clause, which gives Congress the authority to pass law that “promotes the progress of science and the useful arts,” and the First Amendment, which prohibits Congress from passing a law that impinges on free expression. 
As long as copyright law promotes expression, it is potentially constitutional, but if it impinges on expression, then it can’t be constitutional under either provision. (We also pointed to the dissents by Justice Breyer in Golan and Eldred, which cogently and persuasively made these points, because with him leaving the Court this month those dissents are the only way he can continue to speak to the Court’s future consideration of such an important question of free expression.)  The issue in this case, however, is not that Congress tried to make a copyright-related law that was unconstitutional, but that the Second Circuit interpreted copyright law in a way that rendered it unconstitutional: its limiting reading of the fair use provision now stands to chill myriad future expression, something even the majority decision in Eldred cast aspersions on courts doing. We also showed how chilling such a reading would be to new expression by citing the Ninth Circuit’s even more terrible decision in the ComicMix case, where, like the Second Circuit, it found the fair use provision to be much more narrowly applicable to new expression than the Supreme Court had, and we used that case to help illustrate why the reasoning of the Second Circuit is so untenable. In particular, both decisions discounted the degree to which the original works were transformed to convey new meanings not present in the original, extended the exclusive powers of a copyright holder far beyond what the statute itself authorized, and threatened to choke off new expression building on previous works for generations, given the extraordinary length of copyright terms. As the ComicMix case illustrated so saliently, if this be the rule, then the dead have the power to gag the living, and that reality cannot possibly be consistent with a law designed to foster the creation of new expression. Then we concluded by noting that it’s a fallacy to presume that giving more and more power to a copyright holder translates into more expression. Not only is there plenty of evidence to show that more copyright power is unnecessary for stimulating more expression, but, as these cases illustrate, more power will ultimately result in even less. Other amicus briefs are available on the Supreme Court’s docket page. We now await the response from Goldsmith and her amici, and oral argument, currently scheduled for October 12. And then, assuming precedent and actual Constitutional text still matter at all, hopefully a decision reversing the Second Circuit and reaffirming the right to free expression that the fair use doctrine is supposed to protect.

Read More...
posted 2 days ago on techdirt
Law enforcement agencies have access to very powerful digital tools. Thanks to companies with eyes on market expansion but very little consideration of moral or ethical issues, cops have the power to completely compromise phones, turning them into unwitting informants… or worse. This blockbuster report — written by Andy Greenberg for Wired and based on research performed by Citizen Lab and SentinelOne — shows cops can use powerful malware to create the probable cause they need to start arresting people. The fix is in. More than a year ago, forensic analysts revealed that unidentified hackers fabricated evidence on the computers of at least two activists arrested in Pune, India, in 2018, both of whom have languished in jail and, along with 13 others, face terrorism charges. Researchers at security firm SentinelOne and nonprofits Citizen Lab and Amnesty International have since linked that evidence fabrication to a broader hacking operation that targeted hundreds of individuals over nearly a decade, using phishing emails to infect targeted computers with spyware, as well as smartphone hacking tools sold by the Israeli hacking contractor NSO Group. But only now have SentinelOne’s researchers revealed ties between the hackers and a government entity: none other than the very same Indian police agency in the city of Pune that arrested multiple activists based on the fabricated evidence. I get it. Who doesn’t like an easy day at work? Planting evidence makes arrests easy. Cops do it all the time. The difference here is the cops don’t have to carry around contraband on their persons or in their vehicles and wait for a situation to present itself. Using powerful malware, officers can plant evidence whenever it’s most convenient for them and follow up with an arrest and device seizure that allows them access to the evidence they planted. And it’s not just for phones. The report notes that one activist arrested as the apparent result of planted evidence had his laptop compromised by police malware, allowing the Pune police to add 32 incriminating files to his hard drive. It took researchers several months to confirm attribution. The link to the police department came via a recovery email address and phone number attached to compromised email accounts. That information was traced back to a police official in Pune who somehow thought it was wise to include his full name in the bogus recovery accounts. That malware deployment has turned from passive surveillance to offensive evidence-planting shouldn’t come as a surprise. Very few malware developers care how their products are used and tend to make changes only when prompted by sanctions or months of negative press. And it definitely shouldn’t come as a surprise that an element of the Indian government is abusing malware to plant evidence to shut down dissent. That’s the Indian government’s main goal at this point: to force the nation’s 1.4 billion residents into subservience by any means necessary. Whether it’s a law that abuses the notion of national security to turn residents into billions of data points or the government openly targeting critics via social media services (and threatening those services with fines and imprisonment when they fail to play along), the Indian government continues to expand the size of its thumb and, with any luck, will have an entire nation under it in the near future.

Read More...
posted 2 days ago on techdirt
The 2022 Premier Adobe XD UI/UX Design Bundle has 6 courses to help sharpen your web design skills. Courses cover Premiere Pro, XD, Illustrator, and more. The bundle is on sale for $40. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Read More...
posted 2 days ago on techdirt
Over the last few weeks, we’ve written quite a bit about the American Innovation and Choice Online Act (AICOA), which has become the central push by a bunch of folks in Congress to create a special antitrust bill for “big tech.” There are some good ideas in the bill, but, as we’ve been highlighting, a major problem is that the language in the bill is such that it could be abused by politically motivated politicians and law enforcement to go after perfectly reasonable content moderation decisions. Indeed, Republicans have made it clear that they very much believe this bill will enable them to go after tech companies over content moderation decisions they dislike. Most recently, they’ve said that if the bill is clarified to say that it should not impact content moderation, they will walk away from supporting the bill. That should, at the very least, give pause to everyone who keeps insisting that the bill can’t be abused to go after content moderation decisions. We recently wrote about four Senators, led by Brian Schatz (with Ron Wyden, Tammy Baldwin, and Ben Ray Lujan), suggesting a very, very slight amendment to the bill, which would just make it explicit that the law shouldn’t be read to impact regular content moderation decisions. In response to that Schatz letter, Rep. David Cicilline (who is spearheading the House version of the bill, while Senator Amy Klobuchar is handling the Senate side) sent back a letter insisting that Section 230 and the 1st Amendment already would prevent AICOA from being abused this way. Here’s a snippet of his letter. Moreover, even if a covered platform’s discriminatory application of its terms of service materially harmed competition, the Act preserves platforms’ content-moderation-related defenses under current law. Section 5 of S. 2992 states expressly that “[n]othing in this Act may be construed to limit . . . the application of any law.” One such law is Section 230(c) of the Communications Decency Act. Under that provision, social-media platforms may not “be treated as the publisher or speaker of any information provided by another information content provider.” They also may not be held civilly liable on account of “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Accordingly, as with other liability statutes enacted since the passage of Section 230, Section 230 provides “an affirmative defense to liability under [the Act] for . . . the narrow set of defendants and conduct to which Section 230 applies.” Another still applicable law is the First Amendment to the U.S. Constitution, which the Act does not—and indeed, cannot—abrogate. He then goes on in more detail as to why he believes the bill really cannot be abused. And while he does note that he remains “committed to doing what is necessary to strengthen and improve the bill” and that he is happy to keep working with these Senators on it, the very clear message from his letter is that he’s pretty sure the bill is just fine as is, and that Section 230 and the 1st Amendment already protect against abuse. Finally, your proposed language for the Act—although well intentioned—is already reflected in the base text of the bill. As detailed above, among other things, section 5 of S. 2992 preserves the continued applicability of current laws, including 47 U.S.C. 
§ 230(c), that protect social-media platforms from liability for good-faith content moderation. Although I agree that legislation is necessary to address concerns with misinformation and content-moderation practices by dominant social-media platforms, I have consistently said that this legislation is not the avenue for doing so. As such, this legislation is narrowly tailored to address specific anticompetitive practices by dominant technology firms online. And as the Department of Justice has noted, it is a complement to and clarification of the antitrust laws as they apply to digital markets. As such, it does not supersede other laws. Except… Cicilline is wrong. Very wrong. We at the Copia Institute this week signed onto a letter from TechFreedom and Free Press (two organizations that rarely agree with each other on policy issues) along with some expert academics explaining why. The letter explains why Cicilline’s faith in Section 230 and the 1st Amendment is misplaced. It walks through, step by step, ways in which motivated state AGs (or even the DOJ) might get around those concerns, by claiming that moderation decisions were not actually content-based decisions, but business conduct focused on anti-competitive behavior. We don’t have to look far to see how that played out: the Malwarebytes case was an example of that in action. That was a case where a company was able to avoid Section 230 by claiming that a moderation decision (calling an app malware) was actually done for anti-competitive reasons. But with AICOA, we could get that on steroids. As the letter notes: There is a substantial risk that courts will extend the Malwarebytes reasoning to exclude AICOA claims from Section 230 protection—including politically motivated claims aimed at content moderation. Specifically, courts may try to harmonize the two statutes—i.e., “strive to give effect to both”—by accepting some showing of anticompetitive results as sufficient to circumvent Section 230(c)(2)(A) in non-discrimination claims. Anticompetitive animus is not required by the plain text of AICOA § 3(a)(3). Allowing only AICOA claims that allege (and, ultimately, prove) anticompetitive motivation to bypass Section 230’s protection would infer an intent requirement where Congress chose not to include one. While courts do sometimes infer intent requirements, they may reasonably conclude that doing so here would effectively read Section 3(a)(3) out of the statute. How could a platform with no direct stake in the market where competitive harm is alleged ever have an anticompetitive intent? Thus, how could any plaintiff ever bring a Section 3(a)(3) claim regarding “harm to competition” between downstream business users that would survive Section 230(c)(2)(A)? For Rep. Cicilline’s presumptions about Section 230 to be correct, courts would have to effectively render Section 3(a)(3) a nullity by holding that only claims of self-preferencing—but not discrimination between other business users—are actionable. This is an implausible reading that clearly contradicts what the present draft of AICOA says. The Malwarebytes court relied heavily on Section 230’s “history and purpose” as evincing Congressional intent to “protect competition.” Here, there is explicit statutory language and legislative history from which a court could conclude that AICOA’s purpose is to prohibit anticompetitive results, regardless of motive—and thus to carve those claims out from Section 230. 
This result would apparently be statutorily required if another bill co-sponsored by Sen. Klobuchar becomes law: the SAFE TECH Act (S. 299) would amend Section 230 to exempt “any action brought under Federal or State antitrust law.” There’s a lot more in the letter, but the point is clear: the idea that Section 230 will magically stop the abuse of this bill is contradicted by the way the law is currently drafted and by actual cases already on the books.

Read More...
posted 2 days ago on techdirt
In response to the Supreme Court’s recent assault on female bodily autonomy, numerous U.S. corporations have announced they’ll pay for employees’ abortion-related travel. You’re to ignore, apparently, that many of these same companies continue to throw millions of dollars at the politicians responsible for turning the Supreme Court into a dangerous, cruel, legal norm-trampling joke: 1. Several companies that have announced they will cover travel costs for employees that need an abortion are financially backing a political committee openly devoted to eliminating abortion rights around the country. Follow along if interested https://t.co/Ikuht6Zhz9 — Judd Legum (@JuddLegum) June 27, 2022 With abortion now or soon to be illegal in countless states, there’s newfound concern about the privacy issues we’ve talked about for years, like how user location data, period tracking data, or browsing data can all be used against women seeking abortions and those looking to aid them… by both the state and violent vigilantes (thanks to flimsy U.S. standards on who can buy said data and how it can be used). Reporters who have tried to ask modern data-hoovering companies if they’ll do a better job of securing data to ensure it can’t be used against women, or if they’ll fight efforts from states hunting abortion seekers and aiders in and out of state, have been met with dead silence. Not even rote statements on how the safety of women is important, but dead silence: Multiple tech companies are saying they'll pay for employees to travel for abortions. (Employees who probably already have resources to do so unlike many Americans.) I've heard zero about how these companies intend to protect user data from being used to criminalize abortion. — Tonya Riley (@TonyaJoRiley) June 24, 2022 Motherboard asked a long line of companies including Facebook, Amazon, Twitter, TikTok, AT&T, Uber, and Snapchat if they’d hand over user data to law enforcement and not a single one was willing to commit to protecting women’s data: Motherboard asked if each will provide data in response to requests from law enforcement if the case concerns users seeking or providing abortions, or some other context in which the agency is investigating abortions. Motherboard also asked generally what each company is planning [to do] to protect user data in a post-Roe America. None of the companies answered the questions. Representatives from Twitter and Snapchat replied to say they were looking into the request, but they did not provide a statement or other response. To be fair, company legal departments haven’t finished doing the risk calculations of showing a backbone and upsetting campaign contributors and law enforcement. They’ve also got to weigh the incalculable looming harms awaiting countless women against any potential lost snoopvertising revenues, so there’s that. As public pressure grows, ham-fisted state enforcement begins, and the dynamics of the Roe repeal become harder for them to ignore, several of these companies may find something vaguely resembling a backbone in time. But the initial lack of any clarity or courage whatsoever in the face of creeping authoritarianism (and a high court gone completely off the rails) doesn’t inspire a whole lot of confidence.

Read More...
posted 2 days ago on techdirt
An interesting development in the digital world has been the continuing rise of gaming as a hugely popular activity, and a hugely profitable industry. Flowing from that rise and popularity, there is yet another fascinating aspect: streaming games for entertainment. The best-known example of this phenomenon is Twitch, now owned by Amazon. A new paper by Amy Thomas, entitled “Can you play? An analysis of video game user-generated content policies”, presents one of the first in-depth analyses of the copyright aspects of this new entertainment category, and its very particular user-generated content (UGC). As she points out, copyright has trouble dealing with game streaming. Copyright applies to many aspects – the underlying software, the images, the sounds, the scripts – and yet the game streamer is not infringing on these in any meaningful way, but building on them in a playful and creative sense that is beneficial for the game studio. Game streamers – especially the best ones – act as skilled and unpaid marketers who show off all the best elements of a game, often leading spectators to try it out themselves, if they have not already done so. Thomas writes: With the slow pace of policy change and judicial interpretation by courts, it seems unlikely that the legal treatment of game UGC in copyright doctrine will change any time soon. Without intervention, this leaves UGC creators in an uneasy state of tolerated infringement, with an omnipresent threat of enforcement measures. In the face of this doctrinal gordian knot, the video games industry has responded with an alternative mechanism of regulating user creativity: contract. Now, game companies routinely consider the user who approaches a game, not as a passive consumer, but as an active creator who is interested in what rights are licensed to them to interactively create with a game. The contract is established with the End User License Agreement (EULA) that players must accept. Thomas looks at the EULAs of 30 games in order to understand how game companies are moving beyond the strict and unhelpful prohibitions of copyright to find ways to work with game streamers for mutual benefit. She explores eight aspects of game streaming that are regulated through the EULA: videos, monetization, screenshots and game photography, soundtracks, fan works, merchandise, modding, and commercial use. One of the most striking results is that a surprisingly high number of game companies allow the monetization of their game content (7 without condition, 12 with). However, monetization has its limits, in the following way: UGC policies mainly permit passive ad revenue, money gained from partnership programmes with online platforms, and fan donations. Paywalls in any form (e.g., Patreon), whilst strictly constituting ‘monetisation’, are almost universally prohibited amongst those rightsholders who attach conditions to the monetisation permission (with the exception of Mojang who allow for a 24-hour embargo of paywalled content). As such, it may be more accurate to define monetisation as a user’s entitlement to derive passive income from their UGC, but not the active solicitation of money from other users at the point of access. In this sense, monetisation of UGC is not transactional, but rather merit-based; other users may reward the creator of UGC with their time, subscription, or donation, but cannot be actively charged to access the content. “Merit-based monetization” is a great way to describe patronage of the kind that true fans can provide. 
As previous posts on this blog have suggested, it represents one of the best alternatives to a copyright system that isn’t working for the digital world. The new research about game streaming from Thomas confirms both of those aspects. Follow me @glynmoody on Twitter, Diaspora, or Mastodon. Originally posted to Walled Culture.

Read More...
posted 3 days ago on techdirt
Ten states are currently home to a version of California’s “Marsy’s Law.” This law is a “victim’s rights” law, named after a California murder victim. It was written with the intent of involving crime victims in the criminal justice process, giving them a “right” to be heard during court proceedings, choose their own representation (rather than be solely represented by the prosecution), and — as is most relevant here — prevent crime victims’ names from being released publicly. That’s where these laws have become convenient for cops. When cops deploy excessive force (including killing people), the person subjected to police violence is often hit with criminal charges. Resisting arrest is a popular one. So is “assaulting an officer,” which may mean nothing more than that a person bumped into an officer while being detained. Since those are criminal charges, the cops turn themselves into victims, despite having performed far more violence than the person they restrained (to death, in some cases). States where victim rights laws are in force allow officers to prevent their names from being published by media covering deadly force incidents. Since the cops are nominal “victims,” the law applies to them. A law enforcement officer in South Dakota used the state’s law to keep their name out of the papers following their shooting of a driver during a traffic stop. The same thing happened in Florida a few years later. Two cops who deployed deadly force were able to convince a judge the state’s Marsy’s Law applied to them — even superseding the public’s right to this information through the state’s public records laws. It has happened again. Same state, same law, same outcome. Here’s Scott Shackford for Reason: In Sarasota County, three deputies were sent to a condo in April to help evict 52-year-old Jeremiah Evans. According to Sarasota County Sheriff Department’s report, Evans pulled out a knife and threatened the deputies. One of the deputies shot and killed Evans. Prosecutors determined that the shooting was justified. The Sarasota Herald-Tribune submitted a public records request to the State Attorney’s Office, and among the information they received were the unredacted last names of the deputies involved. Then the Sarasota County Sheriff’s Office swung into action, going to a judge to invoke Marsy’s Law to try to prohibit the newspaper from publishing the names of the officers involved. On Friday evening a judge granted a temporary injunction preemptively prohibiting the newspaper from publishing the officers’ names. Despite having itself failed to redact the names, the State Attorney’s Office supported the sheriff’s department and joined the action against the newspaper, essentially attempting to shift responsibility for its own supposed breach of the law onto the paper. The Herald-Tribune, which had already obtained some of this information (last names only) from the state attorney’s office, is rightfully upset at this turn of events. It has filed a motion in opposition to this injunction — one secured by both the Sheriff’s Office and the state attorney — pointing out that this is an unjustified abuse of the victim’s rights law in hopes of memory-holing information already provided to the paper. 
In the newspaper’s motion, attorneys said nothing in Marsy’s law creates a private right of action against third parties or empowers courts to “censor private persons, such as respondents.” If disclosure of the deputies’ names violated Marsy’s Law, the motion argues, the violator was the State Attorney’s Office, not the newspaper.  “Petitioners cite no case law that places Marsy’s Law above the free-speech guarantee in Article I, Section 4 of the Florida Constitution. And any reading of Marsy’s Law that prohibits the news media from publishing publicly disclosed information also would bring Marsy’s Law into conflict with the United States Constitution,” the motion states. First and foremost, the law cannot be used to stuff the genie back into the bottle. The newspaper already has access to the involved officers’ last names, thanks to a public records response by the state attorney’s office. The emergency injunction does not prevent the paper from publishing information it already has because the public release, as the paper points out, was performed by the state attorney. Second, the injunction process appears to have abandoned the concept of due process entirely. It was obtained by the sheriff and state attorney with zero opportunity for input from the party directly affected by the injunction. The paper was not notified the injunction was being sought and was not informed of law enforcement’s efforts until after the order was secured. And it was obtained on Friday evening at 6:30 pm, presumably to maximize the length of the questionably obtained opacity, preventing the paper from engaging in any challenge of the order until the following Monday. This certainly isn’t the way those writing these laws expected them to be used. But that’s what these laws enable when they’re abused by public employees who deploy deadly force: a larger gap between state law enforcement officers and the already distant accountability that rarely serves to deter future misconduct.

Read More...
posted 3 days ago on techdirt
For at least three years now, we have been discussing the goings on concerning a trademark application submitted by Ohio State University for using the word “the” on apparel. If your brain just came to a screeching halt, it may be because you’re not a college sports fan. See, Ohio State University absolutely loves referring to itself as The Ohio State University. Part of the tradition is that athletes who go on to have professional careers always announce their college affiliation by really leaning into the word “the”. Even college sports commentators think it’s all very stupid, and the USPTO initially rejected the trademark application, largely on technical grounds. Which was curious, because technical grounds aren’t the largest issue here. The USPTO should have rejected the application on the grounds that the word “the” is one of the most commonly used words in the English language and therefore shouldn’t get trademark protection, not to mention that a shirt with the word “the” on it does absolutely nothing to inform the public that that shirt is an OSU product. But OSU pushed for the trademark in yet another application… and the USPTO somehow decided to grant the mark. The U.S. Patent and Trademark Office approved Ohio State’s application Tuesday by issuing a registration certificate. It allows Ohio State to control the use of “THE” on “clothing, namely, t-shirts, baseball caps, and hats; all of the foregoing being promoted, distributed, and sold through channels customary to the field of sports and collegiate athletics,” the certificate reads. You can see the absurd certificate in the link. It looks hilarious, with just the word “the” at the top. Except that none of this is actually funny. Why? Well, because the USPTO’s actions now mean that nobody else can make any athletic apparel, hats, or other clothing adorned solely with the word “the”. And while very few people or companies actually do that, they certainly should be allowed to. Because it’s the word “the”. No matter how annoying OSU has been with its silly little tradition, the word “the” on clothing is not identified with OSU. Or any other entity. Because it’s just the word “the”. And the USPTO really, really should know better.

Read More...
posted 3 days ago on techdirt
Last month, we noted that there was a new “protect the children” bill proposed in the EU that would effectively outlaw encryption while simultaneously requiring full internet scanning of basically all activity. As we noted in our post, it was still early in the process, and now the German government has stepped up to say that this proposed regulation is a terrible idea and would devastate basic human rights. That’s exactly right. The German government in the past weeks repeatedly slammed the bill as an attack on privacy and fundamental rights, with its digital minister Volker Wissing warning this week that the draft law “crosses a line.” In response, the EU Commissioner who is championing the proposal tried to insist that the proposal is much more narrow than people are making it out to be, but that’s wrong. It’s based on the faulty assumption that you can magically keep end-to-end encryption while simultaneously being able to scan messaging communications for certain content. That’s not possible. Hopefully that puts a quick end to this proposal, but I fear it will keep popping up quite a bit over the next few years.
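To see why the “scan it but keep it end-to-end encrypted” idea falls apart, consider what an end-to-end encrypted relay actually handles. Here’s a toy sketch using the PyNaCl library; this illustrates the general principle only, not the EU proposal’s architecture or any real messenger’s code:

```python
# Toy illustration of why server-side scanning is incompatible with
# end-to-end encryption. Requires PyNaCl: pip install pynacl
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at 6")

# The relay server only ever sees this ciphertext. There is no possible
# scan_for_banned_content(ciphertext) step: without a private key, the
# bytes are indistinguishable from random noise.
relayed = ciphertext

# Only Bob can recover the plaintext.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(relayed))  # b'meet at 6'
```

The only way to scan, then, is to do it on the user’s device before encryption (or after decryption), which simply relocates the inspection point inside the endpoint, the very thing end-to-end encryption exists to keep closed.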

Read More...
posted 3 days ago on techdirt
The “Miranda rights” established by the Supreme Court in 1966 are a little less guaranteed going forward. The Supreme Court has issued an opinion [PDF] that limits what citizens whose rights have been violated can do, restricting them to exercising these rights during criminal trials as a component of their Fifth Amendment rights. The Miranda warning mandated by the Supreme Court is supposed to prevent arrestees from being deprived of legal representation during questioning or exercising their Fifth Amendment right to remain silent. Statements obtained without the reading of these rights (and without arrestees affirmatively waiving those rights) are supposed to be unusable in court. Many times this isn’t the case. The un-Mirandized statements survive dismissal attempts and result in people being convicted despite their rights being violated. When subsequent challenges (at the appellate level, etc.) reveal the statements were made without respect for, or notification of, these rights, citizens have usually been able to file civil rights lawsuits alleging violations of their Fifth Amendment rights under the Miranda decision. That is no longer the case. The Supreme Court (in an ideologically split 6-3 decision) has declared suing over violated Miranda rights is no longer an option. Here’s the ACLU’s summary of the decision: Today, in Vega v. Tekoh, the court backtracked substantially on its Miranda promise. In Vega, the court held 6-3 (over an excellent dissent by Justice Elena Kagan) that an individual who is denied Miranda warnings and whose compelled statements are introduced against them in a criminal trial cannot sue the police officer who violated their rights, even where a criminal jury finds them not guilty of any crime. By denying people whose rights are violated the ability to seek redress under our country’s most important civil rights statute, the court has further widened the gap between the guarantees found in the Bill of Rights and the people’s ability to hold government officials accountable for violating them. The Supreme Court says the Miranda ruling was nothing more than something meant to encourage law enforcement officers to respect Fifth Amendment rights. Even if they fail to do so, it doesn’t mean they should be sued for rights violations. In Miranda, the Court concluded that additional procedural protections were necessary to prevent the violation of the Fifth Amendment right against self-incrimination when suspects who are in custody are interrogated by the police. Miranda imposed a set of prophylactic rules requiring that custodial interrogation be preceded by now-familiar warnings and disallowing the use of statements obtained in violation of these new rules by the prosecution in its case-in-chief. Miranda did not hold that a violation of the rules it established necessarily constitute[s] a Fifth Amendment violation. That makes sense, as an un-Mirandized suspect in custody may make self-incriminating statements without any hint of compulsion. Maybe so. But that’s the entire point of the Miranda ruling. Law enforcement is supposed to make people aware of their rights so they don’t make self-incriminating statements under the mistaken belief they have no other option but to start talking while in police custody. The “prophylactic” is supposed to shield people from law enforcement abuse of their rights, but this decision encourages abuse by limiting the possible negative outcomes of Miranda rights violations. 
This is something law enforcement already routinely abuses. Cops will question people in their homes, cars, driveways, places of work — all under the legal assumption that a person surrounded by officers (but not actually locked in an interrogation room) is somehow “free to go.” Even when they do Mirandize people, they do everything they can to subvert these rights to avoid having to deal with lawyers or arrestees who now realize they don’t have to say a damn thing while being questioned. This decision means some rights are more equal than others. You can still file a Section 1983 lawsuit against officers for violating other rights (Fourth, First, Eighth, and Fourteenth are the most common) but you can’t sue under certain elements of the Fifth Amendment. The facts of the case undercut this conclusion. Here’s a very concise summary of the events leading to this lawsuit, which started when law enforcement arrested Terence Tekoh for allegedly sexually assaulting an immobilized female patient at a Los Angeles hospital: Carlos Vega, a Los Angeles County sheriff deputy, questioned Tekoh, although he failed to read him his rights as required by the 1966 precedent of Miranda v. Arizona, where the court held that a defendant must be warned of a “right to remain silent.” Under that precedent, without the Miranda warning, criminal trial courts are generally barred from admitting self-incriminating statements made while the defendant was in custody. Tekoh ultimately confessed to the crime, was tried and acquitted — even after the introduction of his confession at trial.  This decision limits the remedy for Miranda violations to the suppression of evidence during trials — something that did not happen here. The prosecution was able to convince the trial court Tekoh’s statements were voluntary, even though the officers never informed Tekoh of his rights. The dissent (written by Elena Kagan) points out the majority is overriding its own precedent and claiming there’s no inherent rights violation in interrogating someone who hasn’t been informed of their rights. The Supreme Court now pretends Miranda rights are not constitutional rights, despite stating otherwise several times. Begin with whether Miranda is “secured by the Constitution.” We know that it is, because the Court’s decision in Dickerson says so. Dickerson tells us again and again that Miranda is a “constitutional rule.” 530 U. S., at 444. It is a “constitutional decision” that sets forth “‘concrete constitutional guidelines.’” Id., at 432, 435 (quoting Miranda, 384 U. S., at 442). Miranda “is constitutionally based”; or again, it has a “constitutional basis.” 530 U. S., at 439, n. 3, 440. It is “of constitutional origin”; it has “constitutional underpinnings.” Id., at 439, n. 3, 440, n. 5. And—one more—Miranda sets a “constitutional minimum.” 530 U. S., at 442. Over and over, Dickerson labels Miranda a rule stemming from the Constitution. But not anymore, the majority has unilaterally declared. Now it’s just a “prophylactic” meant to protect people from rights abuses. When it fails to do so, the Supreme Court says there’s no rights violation, which means no one can sue over these specific violations. The Fifth Amendment isn’t stricken from the litigation books, but it is damaged by the court’s decision to make Miranda rights violations exempt from civil rights lawsuits. Today, the Court strips individuals of the ability to seek a remedy for violations of the right recognized in Miranda. 
The majority observes that defendants may still seek “the suppression at trial of statements obtained” in violation of Miranda’s procedures. Ante, at 14–15. But sometimes, such a statement will not be suppressed. And sometimes, as a result, a defendant will be wrongly convicted and spend years in prison. He may succeed, on appeal or in habeas, in getting the conviction reversed. But then, what remedy does he have for all the harm he has suffered? The point of §1983 is to provide such redress—because a remedy “is a vital component of any scheme for vindicating cherished constitutional guarantees.” Gomez v. Toledo, 446 U. S. 635, 639 (1980). The majority here, as elsewhere, injures the right by denying the remedy. The (occasional [it didn’t even happen in the case triggering this SCOTUS review!]) suppression of evidence may derail a few prosecutions. But it won’t do anything to encourage cops to ensure the people they question are apprised of their rights under the law. If anything, it will encourage officers to keep detainees and arrestees in the dark, knowing they can’t be directly sued for refusing them access to counsel or pretending these rights don’t exist to coerce people into confessions. The decision is pure cognitive dissonance: one that says un-Mirandized statements are a rights violation when submitted as evidence during trials but not a rights violation when the falsely accused/arrested/convicted bring lawsuits against officers.

Read More...
posted 3 days ago on techdirt
The Basics of Python Programming course will help you explore both basic and intermediate concepts of the Python programming language – the world’s most popular programming language. Learn how to write Python programs from scratch with a hands-on approach where you will be able to develop your skills and knowledge as a Python developer. The 4-week course is free. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Read More...