posted about 8 hours ago on techdirt
This week, our first place winner on the insightful side is Stephen T. Stone with a comment about Trump's attempts to evade a copyright lawsuit: Ah, it’s the thing Trump fears most: consequences for his actions. In second place, it's Blake C. Stacey with a quoted update for our post about Trump's social network and its possible violation of Mastodon's license: Talking Points Memo has a story on it this morning: “I do intend to seek legal counsel on the situation though,” Rochko told TPM, while declining to discuss any specific legal action he may be contemplating. “Compliance with our AGPLv3 license is very important to me as that is the sole basis upon which I and other developers are willing to give away years of work for free,” Rochko added. For editor's choice on the insightful side, we start out with an anonymous response to someone stupidly trying to claim the polio vaccine was harmful due to rare side effects: Wild polio causes paralysis in 0.1 to 0.5% of infected so even there the vaccine was an improvement over catching the wild versions. Taking isolated papers and reports, and especially those that justify an anti vaccine stance, is not the way to evaluate a vaccine, you need to read much more widely. Next, it's That One Guy with another comment about Trump's defenses in the copyright lawsuit: Well if you insist... If he wants to argue that presidential immunity applies in this case then it certainly seems like after the judge laughs that argument out of court any fines for infringement should be levied against him personally and shouldn't be dumped on the campaign, since he just claimed that he was directly responsible and involved with the ad and use of music. Over on the funny side, our first place winner is Chris Brand with a low blow at Missouri over the governor's ongoing attacks on the journalists who "decoded" some HTML: Is the ability to read so rare in Missouri that it gets called "decoding"? In second place, it's wshuff with a comment about the launch of Truth Social: Time for The Lincoln Project to start up Consequences Social. For editor's choice on the funny side, we start out with some sarcasm from That One Guy in response to a report about the dangers of client-side scanning: Totally and absolutely unbelievable I find it difficult to take the article seriously as it seems to be based upon a flawed premise, namely that governments would ever ask for more once they've got what they wanted. I mean really, I'm sure once they have one company scanning for a particular kind of content they'll be perfectly content with that, what kind of greedy, self-serving government would take advantage of the new door Apple just provided them to ask for even more? Finally, it's That Anonymous Coward with a comment about Canon disabling scanning and faxing functions on printers that run out of ink: "These precautions are in place to prevent damage to the car from occurring if window wiping with no fluid is attempted. The car uses the fluid to wet the windshield during the driving process. If no fluid is present, the windshield could be damaged or the car would require service." - On why your car radio refuses to work. That's all for this week, folks!

posted 1 day ago on techdirt
Five Years Ago

This week in 2016, we followed up on the previous Friday's ridiculous arrest of Amy Goodman for covering the North Dakota oil pipeline protests, with prosecutors changing their charges from trespassing to the even more ridiculous charge of rioting, only to have them rejected by a judge. An appeals court ruling confirmed what everyone knew about NSA surveillance — that it could be used to investigate domestic suspects — while a tribunal in the UK determined that intelligence agencies there had been illegally collecting data in bulk for more than a decade. Comcast was sued for misleading fees that it claimed were about "transparency", T-Mobile was fined by the FCC for abusing the definition of "unlimited" data, and the FTC was warning that AT&T's court victory on throttling could screw over consumers for decades to come. Meanwhile, Team Prenda suffered yet another huge loss with an order to pay over $650,000 for a bogus defamation lawsuit.

Ten Years Ago

This week in 2011, copyright troll lawyer Evan Stone was appealing a judicial slapdown and sanctions, another mass infringement lawyer was complaining about the number of people fighting back, and Righthaven was still trying to avoid paying legal fees (though the court wasn't having it) while also facing an imminent dismissal in yet another lawsuit. Ron Wyden was continuing to point out the problems with PROTECT IP while we took a look at the connection between that bill and Wikileaks censorship. This was also the week that we first wrote about the birth of CreativeAmerica, the latest astroturf organization from the entertainment industry.

Fifteen Years Ago

This week in 2006, Belgian newspapers were doubling down on their "victory" in getting delisted by Google with demands to be removed from MSN as well, while a News.com editor was using the fight as a springboard for a ridiculous column about how Google is "immoral". Mostly, though, things were shaking out regarding Google's YouTube acquisition: it was causing turbulence for Google's existing advertising deals, there was a revelation that YouTube had given equity to record labels on the morning of the deal, and we noted that attacks from politicians might be an even bigger deal than attacks from the entertainment industry (and Universal Music chose this week to sue a bunch of other video sites instead). Meanwhile, the Authors Guild lawsuit over Google's book scanning was getting off to a very, very slow start.

posted 2 days ago on techdirt
How much of a violation needs to take place before it's a Constitutional violation? It's a trick question, at least in the hands of the right judge. With the wrong judge, a minimal violation is considered excusable, or at least salvageable by any number of Fourth Amendment exceptions. But with the right judge, any Fourth Amendment violation is a Fourth Amendment violation, no matter how small or how fleeting it is. That's how we get to this decision [PDF], handed down by the Supreme Court of Idaho, which not only calls on cops to do better with their drug dog handling, but also tips the hat to recent decisions involving parking enforcement measures. (via FourthAmendment.com) Here are the facts of the case: In March 2019, police officers stopped Howard for a traffic violation and took him into custody after discovering an outstanding warrant for his arrest. Officers then brought in a drug-sniffing dog (“Pico”) to sniff the exterior of the car. Pico alerted to the presence of illegal drugs, and a subsequent search of the car uncovered methamphetamine, heroin, and drug paraphernalia. Neither Howard nor his passenger was the registered owner of the vehicle, and police contacted the owner who took possession of the vehicle at the scene. After prosecutors charged Howard with drug trafficking offenses related to the heroin and methamphetamine, Howard moved to suppress all evidence arising from the search of the car. During the hearing on the motion, Howard argued Pico momentarily put his nose through the open window of the car before giving his final, trained response to indicate the presence of illegal drugs, and that this was a trespass constituting an unlawful search in violation of his Fourth Amendment rights under United States v. Jones, 565 U.S. 400 (2012). The only witness testifying at the hearing was Officer Amy Knisley, Pico’s handler. A portion of Knisley’s body camera footage showing the dog sniff was also admitted into evidence. The district court was fine with Pico's momentary intrusion and denied the motion to suppress. It said that because the sniff was of the dog's own volition, it couldn't possibly have been a rights violation. The district court denied Howard’s motion to suppress because it found the Court of Appeals opinion in State v. Naranjo, 159 Idaho 258, 359 P.3d 1055 (Ct. App. 2015), was controlling. In Naranjo, the Court of Appeals held that a drug dog’s sniff through the open window of a vehicle had been “instinctual”—as opposed to facilitated or encouraged by the police—and therefore was not a “search” for the purposes of the Fourth Amendment. The challenge of the search pointed to the Supreme Court's decision in Jones, which found intrusions -- however minimal -- into private property were unconstitutional without a warrant or any applicable warrant exception. In that case, officers placed a tracking device on a parked car. That minimal intrusion (in service of a greater, more extended intrusion) was impermissible. Idaho's Supreme Court agrees with the defendant. Jones is controlling here. The intrusion may have been minimal but it was still an intrusion. We agree with Howard that Naranjo is inconsistent with Jones and that Pico’s entry was a search. Jones is clear that for purposes of the Fourth Amendment, a search occurs when the government trespasses in order to obtain information. Then it points to a more recent Appeals Court decision that dealt with another form of minimal intrusion. 
Though not squarely on point, and certainly not binding on this Court, we find that the Sixth Circuit Court of Appeals decision in Taylor v. City of Saginaw is instructive. In Taylor, the city enforced time limits for parking by tire chalking, i.e., placing chalk marks on the tread of car tires—marks that rub off as soon as the cars are moved—to determine whether the cars have remained in place longer than allowed. The plaintiff, apparently a frequent recipient of parking tickets, alleged that the practice violated her Fourth Amendment rights. The city responded, in part, by arguing that chalking was not a search for purposes of the Fourth Amendment. The Sixth Circuit disagreed. It held that chalking, though a slight interference with private property, was nevertheless an interference for the purpose of obtaining information and therefore a “search” under Jones. This was the same conclusion a California federal court reached last spring. Chalking a tire is a search. And, if that's upheld on appeal, there will be controlling precedent in Idaho (the Ninth Circuit, which also covers California) that aligns with the findings here. And that finding is that it isn't the means or methods or length/depth of the intrusion. It's the intrusion that matters. Like the marking of chalk on a car tire’s tread, a dog’s nose passing through an open window is a minimal interference with property. But the right to exclude others from one’s property is a fundamental tenet of property law, and we see no room in the Jones test for a de minimis exception. That's the baseline. And the court says the government can't save its search by claiming the drug dog was in the process of alerting prior to the intrusion into the vehicle. The officer's testimony stated that the dog had not shown a "final" alert prior to sticking its nose through the window. Only after that did the dog sit, something the officer said was an "alert." When the statements of Officer Kinsley’s belief are excluded from our consideration of her testimony, these are the facts that remain: (1) Pico is a certified drug dog trained to sit or lie down to indicate the presence of drugs; (2) Pico did not sit or lie down before entering the car; (3) at least sometimes Pico “freezes” or tries to “cheat the system” by looking at the officer for his reward before indicating as he has been trained to do; (4) Pico froze and looked back at the officer before entering the car. From these facts, we cannot know whether Pico’s freezing and looking back was a reliable indication that narcotics were present, and we cannot determine whether Officer Kinsley’s subjective belief was objectively reasonable. For instance, how often does Pico freeze or look back at the officer before giving a final, trained alert? Does Pico only freeze when in odor? Does Pico only try to “cheat the system” when narcotics are present? That's the problem with four-legged probable cause. It's mostly up to the officer interpreting the dog's acts. And, without the benefit of dashcam or body camera recordings, these subjective takes become part of the official record and are difficult to challenge. This recounting of events raises enough questions about the dog's actions that the court is unwilling to call any of what's described above "probable cause." This decision says the government can't have the evidence it obtained with the aid of an intrusive canine. And that means it can't have its conviction either.
Going forward, cops in Idaho are going to need actual probable cause -- not just inconsistent dogs -- before searching people's cars during traffic stops.

posted 2 days ago on techdirt
Eddy Grant, responsible for the banger Electric Avenue, has made it onto our pages a couple of times in the past, most recently over a copyright spat with Donald Trump. At issue in the lawsuit was the Trump campaign sending around a video of a "Trump/Pence" train zipping by, with a Biden hand-car chugging behind it. While there were lots of references to Biden sniffing people's hair (seriously, what is that?) and other silly jabs, the real problem is that the entire video has Electric Avenue playing as its soundtrack. Eddy Grant didn't like this, of course, and sued over it. Trump tried to get the suit tossed on fair use grounds, arguing that the use of the song was transformative... but that isn't how it works. Simply using the song in a way the author didn't intend doesn't make the use transformative. Were that the case, every commercial advertisement out there would feature copyrighted songs as backgrounds to selling all manner of things. Again, not how it works and the court refused to toss the suit in response to Trump's Motion to Dismiss. And so now this whole case moves forward and Trump is once again asserting fair use in his answer to the complaint... but with a twist! More on the twist in a moment, but first the fair use argument. Former President Trump denied Eddy Grant's copyright infringement claims in a formal response submitted to the court late Monday night. "Defendants deny that they have willfully and wrongfully infringed Plaintiffs' copyrights," the response said. "Plaintiffs' claims against Defendants are barred, either in whole or in part, by the doctrines of fair use and/or nominative use." So pretty much the same fair use argument that was made in Trump's initial motion to dismiss (embedded below). This argument almost certainly won't work. And, while I don't find myself arguing against fair use very often, this one doesn't make a whole lot of sense. The video used a significant portion of the song and the song was used in nearly the entire video in question. And, while Trump asserted the video was parody, it's not parody of Electric Avenue. That's the point of the parody defense: the use of a work in order to satirize it. That isn't what's happening here. The target of the satire is Joe Biden, not Eddy Grant or his song. It seems like Trump's legal team might realize that argument is a loser as well, given the added twist I mentioned earlier. The former president also asserted Grant cannot sue him because of what Trump's attorneys called "Presidential absolute immunity." So, here's the thing: someone really needs to get Donald Trump in a room, sit him down, and explain to him that he cannot simply shout "presidential immunity!" every time something in his life doesn't go the way he wants, in the hope of making it magically go away. This immunity claim is something he's using with wild abandon, including in far more serious realms, like denying documents requested by the January 6th committee. But this is far more absurd. It wasn't Donald Trump, the President, that put out this video. Rather, it was the Donald Trump campaign that did so and that campaign very much does not qualify for presidential immunity, "absolute" or otherwise. Immunity for presidents from prosecution or suit typically ends when that person is no longer president and, last time I checked, the subject of the mockery in the video is president now, not Donald Trump. "Given the court's recent favorable determination, there are very few issues that remain to be resolved. 
We are confident that our clients' rights will ultimately be fully upheld and look forward to Mr. Trump fully explaining his actions," Grant's attorney, Brian Caplan, said in a statement provided to ABC News. That's the sound of a lawyer quite confident in his case. And it's frankly quite hard to argue with him.

posted 2 days ago on techdirt
Yeah, it can suck when you fail to handle FOIA requests properly and give the public more information than you intended to. It sucks for the government. It doesn't suck for the public, which is rarely treated to anything more than the most minimal of transparency. Unfortunately, government agencies don't always react well when they've screwed things up. Sometimes the blowback is limited to ineffectual shouting or paper waving. Sometimes, however, it's a lawsuit seeking a court order to prevent people from accessing (or sharing) documents they've legally obtained from a government agency. Cut to Virginia, where it's the latter option being deployed: A Virginia school board is suing two mothers, arguing that documents "inadvertently and mistakenly" released through a Freedom of Information Act request and shared online included confidential information. The Goldwater Institute on Thursday filed a motion with a Virginia judge to dismiss a lawsuit filed by the Fairfax County School Board against Debra Tisler, who obtained documents from the board through a Freedom of Information Act request, and Callie Oettinger, who shared the redacted documents on her website. The lawsuit [PDF] claims the Fairfax County Public School Board never meant to release the information it released, which included personal information about students. Federal law forbids the release of this information to unauthorized parties by government agencies. But that means nothing in the context of this lawsuit. The School Board can be held liable by others for releasing this information. The recipients of this information did nothing wrong, despite the litigious protestations otherwise. The complaint is mostly a list of what the Board did wrong, including failing to subject the FOIA release to review by its legal counsel before sending a link to the Dropbox file to the records requesters. To correct this, the Board repeatedly contacted the recipient. And it was continually ignored… up until it sent multiple physical notifications, at which point the recipient of all of these notifications told the School Board to stop harassing her. Copies of these documents were posted publicly, but sensitive student data was redacted by the recipients. The Board felt this wasn't enough of a capitulation, so it took legal action, which then resulted in the removal of the files from the recipient's website. The Board claims in its filing that it has a legal right to go back in time and undo its mistakes by forcing the FOIA requesters to basically pretend they never received the unredacted information. The Goldwater Institute has stepped in to represent the records requesters and its opposition motion [PDF] points out just how wrong the Board is about the law and the First Amendment. Only the most pressing government interest—such as the publication of troop movements during wartime—can justify the imposition of such a restraint. Id. at 726 (Brennan, J., concurring). But no such interest is identified in the board’s Complaint or its motion for an injunction. On the contrary, the sole bases it asserts for blocking Ms. Oettinger and Ms. Tisler from disseminating the information are the fact that the board could have chosen to withhold some of this information under the VFOIA (though it did not do so), and that some of the documents could be covered by attorney-client privilege between the board and its attorneys. Complaint ¶¶ 40, 44. That is constitutionally insufficient and irrelevant. 
The Board's demands are unconstitutional and there is no precedent that says otherwise. They are government records, lawfully obtained, and Ms. Tisler and Ms. Oettinger have a right to disseminate them, as protected by the rule of Smith, New York Times, and Bartnicki. Even if the documents were inadvertently turned over, they have both a constitutional right and a legitimate democratic purpose for publishing them. For the government to demand that the documents be removed from publication—i.e., censored—is contrary to all constitutional precedent. None of that precedent appears to matter to the court. It has already granted the Board's injunction. Last week, a state judge issued an order barring the women from sharing the documents pending further order of the court, and Oettinger subsequently took the documents off her website. Hopefully now that an adversarial party has entered the legal battle, the court will be forced to reconsider its granting of this injunction. The government should not be allowed to use courts like time machines to erase its mistakes. It should have to live with them, especially when the inadvertently-released documents deal with issues of public interest, like public school spending. The Board's arguments are mostly admissions of wrongdoing on its own part for which it should be held accountable. Instead it has asked the court to punish people who've done nothing wrong.

posted 2 days ago on techdirt
Of all the places to come across illegal facial recognition tech deployment, a convenience store chain is certainly one of the strangest. The tech wasn't deployed to stop shoplifting or keep unwanted people off the premises. Instead, somewhat ironically, it was deployed to help 7-Eleven quantify how well its convenience stores were doing in the customer service department. Here's Campbell Kwan for ZDNet (via Slashdot): In Australia, the country's information commissioner has found that 7-Eleven breached customers' privacy by collecting their sensitive biometric information without adequate notice or consent. From June 2020 to August 2021, 7-Eleven conducted surveys that required customers to fill out information on tablets with built-in cameras. These tablets, which were installed in 700 stores, captured customers' facial images at two points during the survey-taking process -- when the individual first engaged with the tablet, and after they completed the survey. After becoming aware of this activity in July last year, the Office of the Australian Information Commissioner (OAIC) commenced an investigation into 7-Eleven's survey. The investigation [PDF] says 7-Eleven handled pretty much everything about this badly. It also shows the company tried to distance itself from its own tablet-based survey by blaming the third-party vendor handling the survey on its behalf. The facial images were collected twice during the survey and stored locally on the tablets for about 20 seconds. After that, they went to the third party's servers, where they were processed and converted into an algorithmic representation of the face. The original images were then deleted from the device used to perform the survey. These "representations" were then used to check for matches on other surveys. This was done to detect any potential gaming of the system by individuals repeatedly performing surveys and to make guesses about the age and gender of survey takers. All of that data was deleted after seven days. In total, 1.6 million surveys were performed. 7-Eleven argued this was not a violation of Australian law because the images were not used to identify, track, or monitor respondents. It also said it had no access to facial images on the local device, nor any access to images once they had been moved to the third party servers. Wrong, says the information commissioner. The problem isn't how the collected information was handled. The problem is how it was collected. 7-Eleven needed consent from survey takers and didn't get it. The commissioner found "no evidence" individuals "expressly" agreed to have their biometric information collected by 7-Eleven. 7-Eleven argued it did get at least implied consent. As evidence of this it offered the blanket notice displayed in front of all stores: Site is under constant video surveillance. By entering the store you consent to facial recognition cameras capturing and storing your image. It also pointed to its privacy policy on its website -- something survey takers weren't presented with when taking surveys. 7-Eleven may also collect photographic or biometric information from users of our 7-Eleven App and visitors to our stores, again, where you have provided your consent. 7-Eleven collects and holds such information for the purposes of identity verification. None of this is sufficient, says the commissioner. Consent may not be implied if an individual’s consent is ambiguous or there is reasonable doubt about the individual’s intention. 
While I accept that use of the tablet was voluntary, I am not satisfied that the act of using the tablet unambiguously indicated an individual’s agreement to collect their facial image and faceprint, in circumstances where: There was no information provided on or in the vicinity of the tablet, or during the process of completing the survey, about the respondent’s collection of facial images and faceprints. The Store Notices were unclear, and, given the prevalence of these kind of notices in stores and public places, may have created an impression that the respondent captured customers’ images using a facial recognition CCTV camera as part of surveillance of the store. The respondent’s Privacy Policy did not link the collection of photographic or biometric information to the use of in-store ‘feedback kiosks’. Non-specific blanket statements about possible collections are not the same thing as informing survey takers prior to taking a survey that their biometric information will definitely be collected if they fill out a survey. That's some lawbreaking right there. The company that processed the facial images on behalf of 7-Eleven is ordered to destroy all faceprints collected by this survey. It's also forbidden from engaging in this sort of thing again without securing explicit permission from clients' customers. How much of a deterrent this is remains to be seen since the third party already declared all facial recognition data was deleted seven days after it was collected and processed. The greater benefit of a ruling like this -- especially one that deals with information gathered irresponsibly but apparently handled with more care once it was harvested -- is the official reminder it sends to all Australian entities that may currently believe a link to a privacy policy buried on the bottom of a corporation's website home page is all that's needed to obtain "consent" for collection of personal info.
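For readers wondering what "checking for matches" between faceprints actually involves, here is a minimal sketch of the general technique: each face image is reduced to a fixed-length numeric vector (the "algorithmic representation" described in the ruling), and two vectors within some distance threshold are treated as the same person. The embedding function, the vector size, and the threshold below are hypothetical stand-ins; nothing here is taken from 7-Eleven's or its vendor's actual system.

```python
# Hypothetical sketch of faceprint matching for repeat-survey detection.
# compute_faceprint() is a stand-in; a real system would use a trained
# face-recognition model to map an image to an embedding vector.
import numpy as np

def compute_faceprint(image_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a model that maps a face image to a 128-dim embedding."""
    rng = np.random.default_rng(int(image_pixels.sum()) % (2**32))
    return rng.normal(size=128)

def is_same_person(fp_a: np.ndarray, fp_b: np.ndarray, threshold: float = 0.6) -> bool:
    """Treat two faceprints as a match if their Euclidean distance is small."""
    return float(np.linalg.norm(fp_a - fp_b)) < threshold

# Compare a new survey-taker's faceprint against faceprints kept from prior
# surveys: the repeat-respondent check described in the OAIC investigation.
prior_faceprints = [compute_faceprint(np.full((64, 64), v)) for v in (1.0, 2.0, 3.0)]
new_faceprint = compute_faceprint(np.full((64, 64), 2.0))
print(any(is_same_person(new_faceprint, fp) for fp in prior_faceprints))  # True
```

The lesson of the ruling sits upstream of any code like this, though: however the matching is done, collecting the faceprints in the first place requires informed consent.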

posted 2 days ago on techdirt
Hey Missouri: stop electing technically illiterate dipshits. First you had Claire McCaskill, one of the key sponsors of FOSTA (who is still defending it years later). You got rid of her, but replaced her with Josh Hawley, who seems to think his main job in the Senate (besides whipping up support for insurrectionists and planning his run for the Presidency) is to destroy the internet and reshape it according to his own personal vision. And then there's your governor. We wrote about him a few years ago when he claimed (ridiculously) that the 1st Amendment meant he could withhold public records (which is not how any of this works). But, of course, last week, his tech ignorance broke into prime time after the St. Louis Post-Dispatch ethically disclosed that the state's Department of Elementary and Secondary Education (DESE) website was including teacher & administrator social security numbers in the HTML. DESE pulled down the pages, but not before calling the journalists "hackers." Parson then doubled down and called for the journalists to be prosecuted. And then kept insisting that viewing HTML source code was hacking. For the past week people on Twitter have been repeatedly mocking Parson for this, but he just won't give up, and neither will the United Missouri PAC that is a huge Parson supporter and was even fined last year by the Missouri Ethics Commission over improper contributions and failure to report the contributions to Parson. Earlier this week, United Missouri seemed to think that Parson's blatant technical illiteracy was worth doubling down on and turning into a culture war against "the fake news." It produced a video that is so embarrassing and cringeworthy it feels like a parody. I mean, the transcript is so stupid that it makes me wonder about the quality of education in Missouri that someone could be this clueless. The latest from the Missouri "fake news factory" is from the St. Louis Post-Dispatch, where a reporter has been digging around HTML code on a state website. The state technology division said the hacker took the records of at least 3 educators, decoded the HTML source code and viewed the social security numbers from the state website. I mean, holy shit. HTML code is public. That's what "view source" is there for. There's no "digging around." And, incredibly, here United Missouri/Parson are admitting that the social security numbers were in HTML! THAT IS THE PROBLEM! No one should ever be putting SSNs in HTML. The fact that DESE put SSNs in HTML is the very problem that the reporters were highlighting. And if it wasn't actually a problem, why did DESE pull down the website in the first place? It's not hacking. It's showing that Parson's administration is incompetent. And then, the video takes Parson's own failure to protect teachers and administrators in the state... and blames it on the reporters who (ethically) disclosed this negligent coding? Governor Parson believes everyone is entitled to their privacy. Especially our teachers. THEN WHY DID YOUR ADMINISTRATION REVEAL THEIR SOCIAL SECURITY NUMBERS IN HTML, YOU TECHNICALLY IGNORANT FOOLS? No one should ever be putting SSNs in HTML. The fact that they were there is the problem. Not the fact that these reporters alerted the state to their own coding (and data handling) error. The privacy breach is the state's fault, not the reporters. The reporters disclosed all of this in the most ethical manner possible: alerting the state and not publishing anything until after the leaked data was removed from the web. 
Governor Parson is standing up to the fake news media and is committed to bringing to justice anyone who obtained private information. The St. Louis Post-Dispatch is purely playing politics. Exploiting private information is a squalid excuse for journalism. And hiding behind the noble principle of free speech to do it is shameful. Note that they keep calling the St. Louis Post-Dispatch "fake news" but don't dispute a single thing they reported. So it's fake news, but also a crime? Furthermore, the only one who should be "brought to justice" is the state for putting social security numbers in HTML in the first place. And the only one "purely playing politics" appears to be Governor Mike Parson and his corrupt PAC. And, of course, everyone with even the most basic understanding of HTML knows that it's Parson who's full of shit here, as is clear from all the comments on the video. I get that, these days, the Trumpian populist politicians think they can just make shit up and lie constantly and their ignorant base will lap it up, but this takes all that to new levels of stupid. You don't have to be a genius computer science grad to understand that you never ever put SSNs in HTML and that whoever did that is at fault here.
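For anyone still unclear on why "decoding the HTML" is not a thing, here is a minimal sketch of the point. The page content in it is entirely hypothetical, invented for illustration, but the mechanism is exactly what a browser's "view source" shows: the server sends the raw HTML to every visitor, and anything embedded in it can be read with nothing more than a simple pattern match.

```python
# Minimal sketch: whatever a server embeds in the HTML it serves is delivered,
# verbatim, to every visitor. The markup below is hypothetical, purely for
# illustration; "view source" in any browser displays the same raw text.
import re

served_html = """
<tr data-educator-id="1024" data-ssn="123-45-6789">
  <td>Jane Doe</td><td>Example Elementary</td>
</tr>
"""  # stand-in for a response body a state site might have sent

# Finding sensitive data in that markup is a trivial pattern match:
# no "hacking," no "decoding," just reading what was already sent.
ssn_pattern = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
print(ssn_pattern.findall(served_html))   # prints ['123-45-6789']
```

If data like this should never reach the public, the fix is equally simple: don't put it in the page the server sends. That is the error the reporters flagged.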

posted 2 days ago on techdirt
The All-in-One Microsoft, Cybersecurity, And Python Exam Prep Training Bundle has 6 courses to help you gain the skills needed to become a tech professional. The courses contain hands-on lessons and exam prep for Python MTA, ITIL, CompTIA Cybersecurity, and GDPR certification exams. The bundle is on sale for $29. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted 2 days ago on techdirt
So for years I've noted if you really want to understand why U.S. broadband is so crappy, you should take a long, close look at Frontier Communications in states like West Virginia. For decades the ISP has provided slow and expensive service, routinely failed to upgrade or repair its network, and generally personified the typical bumbling, apathetic, regional monopoly. And its punishment, year after year, has generally been a parade of regulatory favors, tax breaks, and millions in subsidies. At no point do "telecom policy leaders" or politicians ever try to do much differently. Case in point: Frontier, fresh off of an ugly bankruptcy, numerous AG and FTC lawsuits over repair delays, and repeated subsidy scandals, is positioning itself to nab yet more subsidies from the state of Wisconsin. Frontier is asking the state for $35 million in additional grants, despite the fact Wisconsin was just one of several states whose AGs recently sued the company for being generally terrible. Folks familiar with the company argue it shouldn't be seeing a single, additional dime in taxpayer resources given fifteen years of scandal: "I hope the state will seriously consider the track record of companies to understand which ones have a long record of meeting the needs of residents and businesses,” Christopher Mitchell, director of the Community Broadband Networks Initiative, a Minnesota-based think tank supporting communities’ telecommunications efforts, said in an interview with The Badger Project. "Frankly, Frontier’s record suggests it should not receive a single additional dollar from any government,” he added. “Local companies, communities, and cooperatives have proven to be much better at turning public subsidies into needed networks." Keep in mind Frontier has been accused of taking state and federal subsidies on several occasions, misleadingly billing the government extra, then basically just shrugging when asked for the money back. To date nobody has done much about any of it. Also keep in mind Frontier routinely lobbies for (and often ghostwrites) state laws banning towns and cities from building their own broadband networks. They're also directly responsible for the gutting of state and federal regulatory and consumer protection authority. Facing little real competition and feckless oversight in most states, nothing much changes. By design. Historically, state politicians and regulators ignore these kinds of problems, because, it should be made clear, they're corrupt. Regional monopolies find it immensely easy to throw a few bucks at state leaders in exchange for just mindless rubber stamping of whatever goal they're interested in (merger approvals, new subsidies, the gutting of consumer protections, tax breaks, zero accountability). That this strategy continually results in terrible, substandard, and expensive service never seems to enter into the picture. It's just rinse, wash, repeat in a long line of states. The Wisconsin State Public Service Commission is expected to grant or deny Frontier's request by the end of the month. The company is also first in line to grab new federal broadband funding from the Biden FCC. It will be curious to see if just a parade of unprecedented scandal reduces Frontier's ability to have millions in additional taxpayer money thrown at it in the slightest. My guess is it doesn't. At all. There are two indisputable reasons U.S. broadband generally sucks: regional monopolization and the corruption that protects it. 
But when you see news articles, regulators, many think tankers, or politicians talking about broadband, notice how few are capable of even clearly acknowledging that fact, much less genuinely interested in actually doing anything about it.

posted 3 days ago on techdirt
A college has done something dumb and unconstitutional. Not all that surprising. Neither is the response, coming from Adam Steinbaugh and FIRE (the Foundation for Individual Rights in Education). Emerson College may be a private university, but that doesn't mean it can just ignore the First Amendment. In fact, it says it won't ignore these rights, which obligates it to uphold them. This is Emerson College in its own words (archived link in case the college decides to disappear it): As an institution dedicated to Communication and the Arts, the first amendment of the US Constitution is of high importance. The right to freedom of speech, freedom of press, freedom of political belief and affiliation, freedom from discrimination, freedom of peaceful assembly, and petition of redress or grievances is not only a right but a community responsibility. [...] The College encourages students to present ideas, express their individuality and culture, and be open to thoughts or life styles that differ from their own. Truly inspiring. And Emerson College truly respects this right. Except when it doesn't. Emerson College suspended a campus chapter of conservative student group Turning Point USA on Oct. 1 after members passed out stickers critical of China’s government. The "conservative group" was Turning Point USA, one created and led by unfortunate human being Charlie Kirk and supported by people who think Charlie Kirk actually has anything useful to offer anyone. No matter what anyone thinks about TPUSA (including me!), this response is not only overblown, but completely ignores the content of the stickers Emerson (and some of its students) got all investigatory about. Under pressure from other student groups who accused TPUSA of anti-Asian bias and xenophobia, including the Emerson Chinese Student Association, the college launched an investigation into the group. In an Instagram video, the TPUSA chapter said the stickers are critical of the Chinese government, not the Chinese people. On Oct. 1, the TPUSA chapter’s leaders received a letter from Julie Rothhaar-Sanders, Emerson’s director of community standards, stating that the college had launched a formal investigation of TPUSA under Emerson’s Bias-Related Behavior and Invasion of Privacy policies. While the investigation is active, TPUSA faces “interim action,” meaning the group is barred from normal activities, such as hosting events or reserving campus space for meetings. Is this really "anti-Asian bias" and/or "xenophobia?" This is the sticker in question, which references a famous meme that originated in a multiplayer game: If you can't see the picture, it features a little "Among Us" spaceman guy dressed in red with a hammer-and-sickle insignia. Underneath it is the phrase "China Kinda Sus." "Sus" being short for "suspicious." Notably it does not say "Chinese people are sus" or "Orientals are sus" or anything else that suggests this sticker refers to anything but the country and, by extension, its government. Is China kinda sus? You be the judge. It refuses to recognize Taiwan as a country, has turned Hong Kong's government into an extension of its own following months of pro-democracy protests, subjects its citizens to intrusive, omnipresent surveillance, censors its citizens and companies providing internet services, and is engaged in the ongoing persecution of certain minorities. That's all pretty "sus." 
Yet, the college chose to believe this was actually an offensive thing to say and bypassed its own stated support for protecting First Amendment rights to limit TPUSA's activities on campus. That has led to FIRE and Adam Steinbaugh not-too-gently reminding the college about First Amendment protections and the college's promise to respect these rights. This is from FIRE's letter [PDF]: The stickers distributed at Emerson and elsewhere are critical of China’s government. They follow a long tradition of student protests on American college campuses criticizing foreign nations, whether those opposing South Africa’s apartheid or, more recently, the government of Israel. Freedom of expression entails the right to criticize not only our own government, but those of foreign nations, even when that criticism is offensive to the “dignity” of those states or threatens to upend “vital national interest[s.]” Even if the college is concerned about its obligations under Title VII, which requires it to investigate and respond to allegations of hostile student environments, this sticker ain't it. First, the speech is not based on race, ethnicity, or national origin. The stickers do not invoke or traffic in stereotypes associated with people of Chinese descent or origin. Instead, the stickers are speech critical of China’s government. The stickers utilize the familiar emblem of the sole governing party of the country, superimposed over a video game character bearing the same red color of China’s flag. The sticker’s text (“China kinda sus”) refers to the name of the country, not its people. Criticism of a foreign government is not inherently criticism of the people it purports to represent, even if people who hail from, descend from, or support that particular nation find that criticism personally offensive. Second, even assuming the stickers’ message was capable of being construed as speech based on race, ethnicity, or national origin, it does not rise to the level of peer-on-peer harassment as properly defined under the law. If Emerson wants to stay out of the lawsuit defendant business, it will drop this investigation and reinstate TPUSA's rights and privileges. If it would rather continue to pretend that criticism of a foreign government is somehow harassment of the student body, it should probably give its legal counsel department a heads-up that it will be expected to defend the indefensible in the near future. Oh, and even if you could make the argument that the combination of TPUSA and its stickers was problematic, Emerson took all this up a notch when its Twitter account started "hiding" any tweet that referenced China, including images of Winnie the Pooh. In case you don't recall, China has a longstanding policy of censoring images of Winnie the Pooh because its President, Xi Jinping, vaguely resembles the fictional bear. Wow. Emerson College—which is investigating a student group for stickers critical of China’s government—is hiding tweet replies that mention China. *Including ones that only show Winnie the Pooh, which is censored in China because people mockingly compare him to Xi Jinping.* https://t.co/PhwjFwnOHo pic.twitter.com/q0A6dgUF2s — Sarah McLaughlin (@sarahemclaugh) October 7, 2021 So, yeah, an American college was literally hiding tweets in the identical manner as the Chinese government, to avoid upsetting the Chinese President. Of course, that only resulted in a lot more posts about Winnie the Pooh, nearly all of which Emerson College has hidden. 
It also blocked users who were tweeting Winnie the Pooh images. Kinda sus, actually. And it really doesn't live up to the promise of a college that "encourages students to present ideas, express their individuality and culture, and be open to thoughts or life styles that differ from their own."

posted 3 days ago on techdirt
It takes a special kind of hubris to appropriate music and lyrics not just from another artist, but another cultural genre of artists, and then threaten someone else for "stealing" what you've "stolen". Meet Barry Mann. If that name doesn't sound terribly familiar to you, fear not, as he is known for the 1961 hit song Who Put The Bomp? and other songs from decades ago. And if that song title doesn't sound familiar, you've almost certainly heard the song. To jog your memory, it includes such made up words as "ramalama ding dong". See, those are called vocables: made up syllables used to effectuate rhythmic form rather than meaning. You can listen to the song below to get an idea of what I'm talking about. "The Mann", which is what I'll be calling him from here on out, is still kicking at 82 and apparently is learning a new hobby: threatening other artists with copyright claims. He and/or his legal representatives apparently sent a cease and desist notice to Le Tigre, a feminist punk band, over a song called Deceptacon. See, Deceptacon takes a couple of lyrics found in The Mann's song and repurposes them to become a feminist anthem. For that and one additional reason that we'll get into later, Le Tigre filed suit seeking declaratory relief from The Mann's copyright infringement claim. Here is Deceptacon so you can go hear for yourself just how copyright-infringe-y this all isn't. Between the suit and the song itself, you should notice a number of things. First off, you may be thinking to yourself that this song sounds decidedly retro for punk music. That's because the song came out twenty years ago and has long been Le Tigre's most famous song. Why a lawsuit is only being filed now is an open question. In addition, the use of the lyrics is minimal and the song itself is nothing remotely like The Mann's song. Additionally, even if Defendants had a legitimate claim to ownership of the small portion of Bomp lyrics at issue, they nonetheless have no copyright infringement claim against Le Tigre or its licensees because Le Tigre’s transformative use of those lyrics in Deceptacon is an emblematic case of fair use under Section 107 of the Copyright Act, 17 U.S.C. § 107. Transformative use? Let's get into that. You may also have noticed that the lyrics are actually slightly different. For instance, the lyric to start the song is no longer "who put the bomp", it's "who took the bomp". Deceptacon’s reference to and inversion of the Bomp lyrics at issue delivers a stinging indictment and parody of Bomp, which is clear from a comparison of the songs’ lyrics and sharply contrasting musical styles, as critics have noted over the decades. Bomp, written from a man’s perspective, begins with the statement: “I’d like to thank the guy who wrote the song that made my baby fall in love with me.” Bomp’s singer asks, “Who put the bomp in the bomp bah bomp bah bomp?” and “Who put the ram in the rama lama ding dong?” Deceptacon, by contrast, is a feminist anthem that begins with the proposition that music “is sucking my heart out of my mind” and continues to ask, “Who took the bomp from the bomp-a-lomp-a-lomp?” and “Who took the ram from the rama-lama-ding-dong?” Thus, Le Tigre’s use of the lyrics that appear in Bomp instills those lyrics with a new meaning that is directly at odds with and a clear criticism of the message in Bomp, which is precisely the sort of fair use that Section 107 of the Copyright Act is designed to protect. But parody and criticism of what? 
Well, there certainly is the feminist angle to it, yes, and Le Tigre is well known for creating that sort of stinging lyrics within its songs. But not just the feminist critique. Remember the change from "put" to "took"? Well... The Bomp lyrics putatively at issue are mainly comprised of song titles and non-lexical vocables (nonsense syllables used in music). But Mr. Mann did not create these vocables or song titles; rather, it appears that Mr. Mann and his cowriter copied them from Black doo-wop groups active during the late 1950s and early 1960s. Specifically, it appears that Mr. Mann took “bomp-bah-bomp-bah-bomp” from The Marcels’ distinctive version of “Blue Moon,” which sold over a million copies, and “rama lama ding dong” from the Edsels’ then-popular “Rama Lama Ding Dong.” In short, the Bomp lyrics at issue are not original to Mr. Mann, and Defendants have no legitimate copyright claim in them. And that is how this all comes full circle, in a way. The Mann threatened a punk feminist group over a song it created with lyrics designed to specifically criticize how he appropriated those lyrics from black doo-wop groups in the 60s. Like I said, that takes a nearly impressive amount of hubris. As far as copyright cases go, this should be an easy one for the courts.

posted 3 days ago on techdirt
One of the more common violations of the First Amendment is viewpoint discrimination. When entities run into speech they don't like, they often steamroll Constitutional rights in their hurry to shut this speech down. The government is allowed some time and place restrictions on speech, but it is very limited in its options. To expand these options, government entities will often say things about "public safety" to justify their incursion on people's rights. These justifications rarely justify the overreach. Maybe these things happen because governments (incorrectly, in some cases) assume those whose rights have been abridged won't sue. Maybe they happen because governments assume nebulous "public safety" concerns won't be examined thoroughly if they are sued. Or maybe they just assume that, because they're using the public's money to both violate rights and defend against accusations of rights violations, none of this really matters because it isn't any particular government employee's money at stake. That brings us to this case [PDF], where a Maryland federal court has ruled the government had no justifiable reason to shut down a "prayer rally." What it did have were some unjustifiable reasons, which were mainly related to the speakers and the kind of speech the government expected to be uttered… I mean, if it hadn't unconstitutionally shuttered the event. (via Courthouse News Service) Here's some brief background by the court, which doesn't highlight the most likely trigger: alt-right figurehead Milo Yiannopoulos, who has been banned from [name a social media platform]. St. Michael’s, a non-profit organization, “is a vocal critic of the mainstream Catholic Church,” including the United States Conference of Catholic Bishops (“USCCB”). Plaintiff seeks to hold the prayer rally and conference to criticize the Church, particularly with respect to child sexual abuse committed by members of the clergy, and it wants to do so on a date that coincides with the USCCB’s Fall General Assembly. The USCCB plans to meet from November 15 – 18, 2021 at the Waterfront Marriott Hotel (“Hotel”), a private facility located near Pier VI. On or about August 5, 2021, weeks after plaintiff had paid a $3,000 deposit to SMG for use of the Pavilion, SMG, on instruction of the City, notified St. Michael’s that plaintiff could not rent the Pavilion. The City cited safety concerns linked to some of the people who were identified as speakers at the event. Given the average government's "for the children" protestations whenever it plans to violate rights, you'd think a rally criticizing a religious entity infamous for sexual abuse of children would be right up its rhetorical alley. You'd be wrong -- not if its "allies" include people the elected officials of Baltimore find noxious. (That list includes Yiannopoulos, former Trump advisor Steve Bannon, and Newsmax commentator Michelle Malkin.) St. Michael's sued, alleging First Amendment violations. The court (unsurprisingly) agrees. First, it notes a similar rally by the same group in 2018 which resulted in no acts of violence or any other threats to public safety. Nevertheless, city officials insisted this time would be different. Michael Huber, Mayor Scott’s Chief of Staff, avers that the discussions between SMG and St. Michael’s “came to the attention” of the City in July 2021. In particular, the City learned that St. 
Michael’s planned a rally featuring speakers “known for encouraging violent actions that have resulted in injuries, death, and property damage.” In the City’s view, some of the speakers would “provoke a strong reaction and raise the potential for clashes and disturbances,” given the “very real potential [that the speakers] would use [the rally] to incite violence and public disruption.” While it's true some of the threat matrix may have changed following an unprecedented attack on the Capitol building in Washington, DC by so-called conservatives apparently hoping to negate a peaceful presidential election, no previous experience with this group should have led city officials to this conclusion. And, while the forum being rented was privately-owned, the city has some say in the issuance (and, in this case, rescinding) of contracts. When it interceded -- for internally inconsistent reasons -- it violated the plaintiff's rights. Without question, the City reacted to a perceived safety concern arising from past use of inflammatory remarks by some of the rally speakers. In thwarting the rally, the City essentially invoked or relied on the heckler’s veto. And, in doing so, it exercised complete, unfettered discretion; it acted on an ad hoc basis, without any standards. Further, it has presented somewhat shifting justifications for its actions, with little evidence to show that the decision was premised on these justifications. As to the matter of discretion, the City apparently has unbridled discretion to determine whether, when, and how to intervene in bookings of the Pavilion. The record before the Court indicates that the process used here was entirely ad hoc. After plaintiff’s plans came to the attention of the City, the City decided to intervene with SMG, requiring SMG to terminate negotiations with St. Michael’s. No policies, guidelines, or procedures have been brought to the attention of the Court providing any factors or systematized approach governing the City’s actions here. As far as the Court is aware, none exist. As the court notes, the city's main concern appeared to be those who would show up and protest the St. Michael's protest, rather than the supposed "incendiary" participants working with St. Michael's. That only adds to the list of ways the city violated the First Amendment. The City’s invocation of a heckler’s veto also raises serious concerns that its decision was motivated by viewpoint discrimination. Huber cited the prospect of counter protestors when explaining the City’s decision. And, at the hearing, counsel for the City placed considerable weight on the City’s concerns as to counter protestors and the disruption and potential violence that might ensue. In other words, the City seems to have based its decision on the anticipated reaction of counter protestors, which is precisely the “persistent and insidious threat[s] to first amendment rights” discussed in Berger, 779 F.2d at 1001… This is not an acceptable justification for regulating speech. And more along those same lines: As the Ninth Circuit put it in Seattle Mideast Awareness Campaign, although this concern might receive less weight outside of a traditional or designated public forum context, it is still relevant when “used as a mere pretext for suppressing expression” based on viewpoint. This includes, for example, “where the asserted fears of a hostile audience reaction are speculative and lack substance.” Such is the case here. 
The City cannot conjure up hypothetical hecklers and then grant them veto power. St. Michael's gets its injunction against the City of Baltimore. The show will go on. The City violated the group's rights when it decided the people who didn't secure the venue were so potentially dangerous the speakers who rented the venue shouldn't be allowed to speak. Deferring to a heckler's veto is indistinguishable from viewpoint discrimination in situations like these. The city decided in favor of one viewpoint (the counterprotesters, a.k.a. the hecklers) and decided the other viewpoint (St. Michael's) had no right to be heard.

posted 3 days ago on techdirt
More than two years ago we wrote about a truly bizarre ruling in a truly bizarre copyright lawsuit against Cloudflare. As you (perhaps?) know, Cloudflare is a popular CDN provider, helping websites (including Techdirt) provide better access to users while helping to mitigate things like denial of service attacks. In this case, the plaintiffs, led by Mon Cheri Bridals -- a maker of bridal dresses -- sued Cloudflare because websites out there were selling counterfeit dresses. If you know anything about copyright (and counterfeiting) law, you should be scratching your head. Counterfeiting is not about copyright. It's about trademark. But the dress company (for reasons I still don't understand) made the stretchiest of stretchy arguments to say that (1) the counterfeit sellers were posting images of the dresses, and (2) those images were protected by a copyright held by the dress maker, and (3) because the counterfeiting sites posting the allegedly copyright infringing photos used Cloudflare for CDN (not hosting) services, that somehow makes them contributorily liable for the copyright infringement. Even worse, the complaint itself was extremely confused about the DMCA and how it works with regards to the DMCA 512 safe harbors. Different companies are treated differently under 512, and Section (b) companies for "system caching" (which is what CDNs do) are treated differently under the law than Section (c) hosting companies. However, the whole "notice and takedown" aspect of the law only applies to Section (c) type companies. But the lawsuit simply ignored that and assumed that Cloudflare should be a (c) company, rather than a (b). And, astoundingly, as we wrote about two years ago, the judge refused to dismiss the case, but let it move forward past the motion to dismiss stage -- meaning that it went through some very expensive discovery and other efforts before finally getting to the summary judgment stage, and now, more than two years later, the judge has granted summary judgment to Cloudflare. And, kinda like his refusal to dismiss, the opinion is kinda short and doesn't get into much in the way of detail. But at least this time it gets it right. The plaintiffs have not presented evidence from which a jury could conclude that Cloudflare’s performance-improvement services materially contribute to copyright infringement. The plaintiffs’ only evidence of the effects of these services is promotional material from Cloudflare’s website touting the benefits of its services. These general statements do not speak to the effects of Cloudflare on the direct infringement at issue here. For example, the plaintiffs have not offered any evidence that faster load times (assuming they were faster) would be likely to lead to significantly more infringement than would occur without Cloudflare. Without such evidence, no reasonable jury could find that Cloudflare “significantly magnif[ies]” the underlying infringement. Amazon.com, Inc., 508 F.3d at 1172. Nor are Cloudflare’s services an “essential step in the infringement process.” Louis Vuitton Malletier, 658 F.3d at 944. If Cloudflare were to remove the infringing material from its cache, the copyrighted image would still be visible to the user; removing material from a cache without removing it from the hosting server would not prevent the direct infringement from occurring. Cloudflare’s security services also do not materially contribute to infringement. From the perspective of a user accessing the infringing websites, these services make no difference. 
Cloudflare’s security services do impact the ability of third parties to identify a website’s hosting provider and the IP address of the server on which it resides. If Cloudflare’s provision of these services made it more difficult for a third party to report incidents of infringement to the web host as part of an effort to get the underlying content taken down, perhaps it could be liable for contributory infringement. But here, the parties agree that Cloudflare informs complainants of the identity of the host in response to receiving a copyright complaint, in addition to forwarding the complaint along to the host provider.

This is the correct ruling, but it should have come two years ago at the motion to dismiss stage. Indeed, despite this not being a Section 230 case, it's yet another example of why Section 230's procedural benefits are so important. Perhaps one reason people don't get this is that they don't understand just how much more expensive a lawsuit becomes once it survives a motion to dismiss -- and it's a massive shift. A motion to dismiss may run in the tens of thousands of dollars range (depending on a variety of factors). But if you get past that and have to go through discovery, you're now talking hundreds of thousands of dollars, possibly pushing past a million before you get a ruling on summary judgment. That's a massive cost for companies, and one that can completely destroy smaller ones -- all for a lawsuit that had no chance at all from the beginning.
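The court's caching point is worth making concrete. Purely as an illustrative toy sketch (not Cloudflare's actual architecture, and with made-up names), here is why purging a cached copy doesn't take the underlying content down: on the next request, the cache simply misses and refetches the image from the host.

```python
# Toy illustration: a cache sitting in front of an origin host. Purging the
# cache does not remove the file from the host, so the content reappears on
# the very next request.

origin_host = {"/dress.jpg": "<image bytes>"}   # hypothetical infringing image lives here
cache = {}

def serve(path):
    if path in cache:                 # cache hit: serve the stored copy
        return cache[path]
    if path in origin_host:           # cache miss: fetch from origin, then store
        cache[path] = origin_host[path]
        return cache[path]
    return None                       # effectively a 404

serve("/dress.jpg")                   # first request populates the cache
cache.clear()                         # a "takedown" aimed only at the CDN cache
print(serve("/dress.jpg") is not None)  # True: origin still serves it, cache refills
```

Which is exactly why the court says removing material from the cache alone "would not prevent the direct infringement from occurring."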

Read More...
posted 3 days ago on techdirt
The Chicago PD -- fronted by the Chicago Fraternal Order of Police (FOP) [itself fronted by John Catanzara, "one of the most frequently-disciplined officers in the history of the Chicago PD"] -- is fighting the city of Chicago's vaccine mandate. Yes, the thin blue line between criminals and the safety of the public has decided it will not stand between the spread of the virus and the safety of the public. Or, indeed, the safety of its officers, apparently. As COVID-19 continues to kill officers five times faster than gunfire, Chicago PD officers have decided they'd rather die from something preventable than receive a vaccine.

Disgraced-officer-turned-police-union-president John Catanzara is the one making the most noise about the city's mandate and is weaponizing the PD's lack of self-care against the mayor and the city itself.

Officers in Chicago had a deadline of midnight Thursday to disclose their vaccine status to the city or be placed on unpaid leave, Catanzara said. Lightfoot said the city would take the weekend to check with officers who haven't complied before putting them on unpaid leave, and that she didn't think that would happen Saturday or Sunday. Lightfoot said officers should report for duty until they're told by supervisors that they've been placed on leave.

"If we suspect the numbers are true and we get a large number of our members who stand firm on their beliefs that this is an overreach, and they're not going to supply the information in the portal or submit to testing, then it's safe to say the city of Chicago will have a police force at 50% or less for this weekend coming up," Catanzara said. "That is not because of the FOP, that is 100% because of the mayor's unwillingness to budge from her hard line. So whatever happens because of the manpower issue, that falls at the mayor's doorstep."

The city apparently takes this threat seriously. It has asked a Cook County (IL) court for an injunction to block the FOP from speaking out against the vaccine mandate, which covers all city workers. The FOP has sued right back, seeking an injunction blocking the city from enforcing the mandate. In the meantime, the PD itself has cancelled all vacation and time off requests to ensure staffing once the mandate goes into effect.

The resistance against the mandate is inexplicable, given the alarming number of law enforcement deaths the virus has caused. Law enforcement agencies demand tougher laws and increased funding every time there's a spike in officer deaths at the hands of suspected criminals. But when it's a preventable disease doing the killing, cops would rather go jobless than be inoculated. This resistance is being led by a man who's mostly known for violating rights and running his mouth. If Catanzara truly speaks for the Chicago PD rank-and-file, perhaps a 50% layoff isn't a bad idea, especially if the city can restock with new officers who aren't already accustomed/resigned to the PD's long history of brutality, violence, rights violations, and nonexistent oversight. Then again, you'd think the rank-and-file would want to continue trying to get a handle on increasing gun violence in the city, given the oft-stated concerns about public safety. But none of that makes this an acceptable interim outcome. The litigation crossfire has resulted in an additional violation of rights.
A judge late Friday issued a temporary restraining order against the Chicago police union president, prohibiting him from making public statements that encourage members not to report their COVID-19 vaccine status to the city. Cook County Circuit Judge Cecilia Horan ruled there was potential irreparable harm if local Fraternal Order of Police President John Catanzara persisted in making such statements. City attorneys argued they were tantamount to him advocating “sedition” and “anarchy” because he was directing members to disobey an order from their superiors.

Catanzara's agitating may be aggravating and annoying but it is not "sedition" or "anarchy." His statements may run contrary to the city's wishes but he should be free to make them. He has not actually called for a strike (which is forbidden by the PD's contract) but rather suggested officers should refuse requests for vaccination status and memorialize these interactions via body cam if possible. Catanzara has also speculated that more than half the police force will no longer be employed if a COVID vaccination is a job requirement. This court order appears to be prior restraint -- something impermissible even with the city's obvious interest in ensuring the safety of its employees.

That being said, maybe Chicago police officers should do the thing they're always telling citizens to do: comply, comply, comply. If it appears rights are being violated, members of the public are expected to take their lumps first and sue about it later. That's what Chicago PD officers should do here: supply the city with their vaccination status and get aggrieved later -- that's if they care at all about being the thin blue line standing between the innocent public and violent criminals.

Read More...
posted 3 days ago on techdirt
Wireless subscribers of Verizon's Visible prepaid service received a rude awakening after hackers compromised their accounts, then ordered expensive new iPhones on their dime. Last week a company statement indicated that "threat actors were able to access username/passwords from outside sources," then used that access to log in to Visible customer accounts. Hacked users say the attackers then used that access to order expensive kit, and, initially, getting Visible to do anything about it was a challenge:

Great, someone hacked my @visible account, purchased iPhone using my PayPal, and changed the password. @visiblecare is not responding. Scammer also tricked me with email spams in an effort to make me miss any email notifications from Visible.

— Kristian Kim (@kristiankim) October 13, 2021

The company seemed to initially claim this was an instance of "credential stuffing," or hackers taking login information obtained from hacks or breaches of other services, then testing those logins on as many services as they can find. But experts doubted that claim, noting that the company had been complaining about issues with its chat services before acknowledging the hack. More specifically, Visible support reps were telling users that ambiguous "technical issues" had left it incapable of making any changes to customer accounts.

There are also questions about when the company knew about the hacks, with it initially trying to claim last week that the hack and subsequent iPhone orders were an ordinary system error:

Although Visible made a public statement yesterday, the company first acknowledged the issue on Twitter on October 8. At the time, Visible provided a vague reason: order confirmation emails erroneously sent out by the company. "We're sorry for any confusion this may have caused! There was an error where this email was sent to members, please disregard it," the company told a customer.

Again, this is where just a basic, internet-era privacy law requiring greater transparency (and perhaps a little more accountability for industries and executives that not only keep failing to secure user data, but clearly aren't great about being honest with their users) would come in kind of handy. Instead we keep just looking at the problem and shrugging because drafting privacy laws with any competency is deemed impossible, letting the repercussions pile up.
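For readers unfamiliar with the "credential stuffing" claim above: it just means replaying username/password pairs leaked from one breach against other services' login pages. One common defensive signal -- sketched generically below, with invented data, and not anything Visible has described using -- is a single source attempting logins against an unusually large number of distinct accounts in a short window.

```python
from collections import defaultdict

# Hypothetical login-attempt log entries: (source_ip, username, success)
attempts = [
    ("203.0.113.7", "alice", False),
    ("203.0.113.7", "bob", False),
    ("203.0.113.7", "carol", True),
    ("198.51.100.2", "dave", True),
]

def flag_stuffing(log, threshold=3):
    """Flag source IPs that try an unusually large number of distinct accounts."""
    users_per_ip = defaultdict(set)
    for ip, user, _ in log:
        users_per_ip[ip].add(user)
    return [ip for ip, users in users_per_ip.items() if len(users) >= threshold]

print(flag_stuffing(attempts))  # ['203.0.113.7'] -- many accounts, one source
```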

Read More...
posted 3 days ago on techdirt
The MacOS 11 Course is here to help with your understanding of macOS's core functionality. You'll learn how to configure key services, perform basic troubleshooting, and support multiple users with essential macOS capabilities. This course is great for help desk professionals, technical coordinators, or power users who support macOS users and manage networks. It's on sale for $30. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Read More...
posted 4 days ago on techdirt
People suing Twitter and Facebook for acts of violence committed by terrorists have yet to talk a court into agreeing with their arguments. Utilizing federal anti-terrorism laws as a way to circumvent discussion of First Amendment and Section 230 issues has worked to a certain extent. It may not have handed any wins to plaintiffs, but it has prevented precedent that would work against these clients (and their law firms -- both of them) as they attempt to define "insanity" through repeated failure.

Via Eric Goldman comes another loss in court for plaintiffs attempting to sue social media companies over an act of terrorism, in this case the mass shooting in an Orlando, Florida nightclub that appears to have no ties to any organized terrorist group. Despite being given multiple attempts to convert the complaint into something actionable, the plaintiffs failed to do so. This is largely because social media companies aren't even indirectly responsible for acts of terrorism. More specifically in this case, the Pulse Nightclub shooting wasn't even, legally speaking, an act of international terrorism. That means there's no cause of action under the plaintiffs' legal vehicle of choice, the Anti-Terrorism Act. From the Eleventh Circuit Court of Appeals decision [PDF]:

We are deeply saddened by the deaths and injuries caused by Mr. Mateen’s rampage, but we agree with the district court that the plaintiffs failed to make out a plausible claim that the Pulse massacre was an act of “international terrorism” as that term is defined in the ATA [Anti-Terrorism Act]. And without such an act of “international terrorism,” the social media companies—no matter what we may think of their alleged conduct—cannot be liable for aiding and abetting under the ATA.

The shooter was an American citizen. He "self-radicalized" with the alleged assistance of social media platforms. He pledged allegiance to ISIS while barricading himself with hostages following the shooting. ISIS arrived shortly thereafter to claim it supported the shooting and the shooter. But there's nothing "international" about this. And the Appeals Court isn't willing to read the ATA as expansively as the plaintiffs would like.

The Pulse shooting… did not transcend national boundaries in terms of the persons it was “intended to intimidate or coerce.” The plausible inference from the plaintiffs’ allegations is that a mass shooting on United States soil is meant to terrorize American citizens and residents. To come to the contrary conclusion we would have to say (or infer) that any act of domestic terrorism, anywhere in the world, is meant to intimidate or coerce all of humankind. And if that were the case, we doubt that Congress would have included this limiting language in the ATA.

Because these claims fail to carry the lawsuit, the court takes no note of the Section 230 and First Amendment implications. That's a bit unfortunate because dismissing lawsuits on ATA and state law claims hasn't stopped these law firms and lawyers from filing multiple, nearly identical lawsuits attempting to hold social media companies directly responsible for violent acts committed by their users. At some point, these issues may be addressed at the federal court level. But today is not that day. And if people still believe this is indicative of Section 230's faults, they should acquaint themselves with the unavoidable fact that Section 230 does not immunize social media companies from allegations of federal law violations.
Yes, it's almost impossible to sue terrorists for violent acts, but suing social media platforms won't actually result in justice, either.

Read More...
posted 4 days ago on techdirt
Last night, Donald Trump sent out a press release announcing (effectively) the launch of his new social network, "Truth Social." The press release shows that it's a bit more complicated than that. Trump is launching "Trump Media & Technology Group" which is entering into a reverse merger agreement to become listed as a public company in order to launch this new service. Apparently, Truth Social will let in "invited guests" next month, followed by a full launch in early 2022. The press release has the expected bombastically ridiculous quote from the former President. "I created TRUTH Social and TMTG to stand up to the tyranny of Big Tech. We live in a world where the Taliban has a huge presence on Twitter, yet your favorite American President has been silenced. This is unacceptable. I am excited to send out my first TRUTH on TRUTH Social very soon. TMTG was founded with a mission to give a voice to all. I'm excited to soon beginning sharing my thoughts on TRUTH Social and to fight back against Big Tech. Everyone asks me why doesn't someone stand up to Big Tech? Well, we will soon!" I mean, first off, it's an interesting world in which you can both claim to be silenced, but also (a) put out a press release read by millions and (b) launch your own damn social network. Doesn't seem that "silent" to me, but what do I know? Anyway, given that Trump claims that the mission here is "to give voice to all," I was pretty interested in Truth Social's terms of service which (unsurprisingly), make it clear they can kick you off for any reason they want at all. Specifically, it says: We reserve the right, but not the obligation, to: (1) monitor the Site for violations of these Terms of Service; (2) take appropriate legal action against anyone who, in our sole discretion, violates the law or these Terms of Service, including without limitation, reporting such user to law enforcement authorities; (3) in our sole discretion and without limitation, refuse, restrict access to, limit the availability of, or disable (to the extent technologically feasible) your access or any of your contributions or any portion thereof; or (4) otherwise manage the Site in a manner designed to protect our rights and property and to facilitate the proper functioning of the Site. In other words, like every other website out there, Truth Social will moderate content. Also, there's this: If we terminate or suspend your account for any reason, you are prohibited from registering and creating a new account under your name, a fake or borrowed name, your email address or the name of any third party, even if you may be acting on behalf of the third party. In addition to terminating or suspending your account, we reserve the right to take appropriate legal action, including without limitation pursuing civil, criminal, and injunctive redress. So, it sure sounds like Truth Social, giving voice to all, is pretty damn sure it's going to have to kick people off its site (and maybe even sue them!). And why might they kick you off? Well, there's a very long list of "Prohibited Activities." I'll highlight some of them... use the Site to advertise or offer to sell goods and services. How much do you want to bet that Trump himself will violate this? engage in unauthorized framing of or linking to the Site. If you LINK TO THE SITE in an "unauthorized" way, they can remove your account. 
trick, defraud, or mislead us and other users, especially in any attempt to learn sensitive account information such as user passwords;

MISLEADING OTHER USERS is grounds for having your account banished. So much freedom.

use any information obtained from the Site in order to harass, abuse, or harm another person.

This kind of thing is standard on all social media sites -- but it's notable here because it's what many Trumpists get kicked off other platforms for. And yet, here it is.

use the Site as part of any effort to compete with us or otherwise use the Site and/or the Content for any revenue-generating endeavor or commercial enterprise.

Any effort to use the site to make money and they'll shut you down. That's... weird.

harass, annoy, intimidate, or threaten any of our employees or agents engaged in providing any portion of the Site to you.

If you annoy Trump or any of his employees, no more Truth for you!

upload or transmit (or attempt to upload or to transmit) viruses, Trojan horses, or other material, including excessive use of capital letters and spamming (continuous posting of repetitive text), that interferes with any party’s uninterrupted use and enjoyment of the Site or modifies, impairs, disrupts, alters, or interferes with the use, features, functions, operation, or maintenance of the Site.

One of those is not like the others. Excessive use of capital letters? I mean, that's not banned on any other social media platform I know of. Seems a lot less "freedomy" than we were led to believe. Also... excessive use of capital letters? Can I think of a famous social media user who regularly made use of that... hmm... who could that be...?

Also, there's an indemnity clause (a pretty broad one actually), so that if they get sued for something, you may be on the hook for the legal bills:

You agree to defend, indemnify, and hold us harmless, including our subsidiaries, affiliates, and all of our respective officers, agents, partners, and employees, from and against any loss, damage, liability, claim, or demand, including reasonable attorneys’ fees and expenses, made by any third party due to or arising out of: (1) use of the Site; (2) breach of these Terms of Service; (3) any breach of your representations and warranties set forth in these Terms of Service; (4) your violation of the rights of a third party, including but not limited to intellectual property rights; or (5) any overt harmful act toward any other user of the Site with whom you connected via the Site. Notwithstanding the foregoing, we reserve the right, at your expense, to assume the exclusive defense and control of any matter for which you are required to indemnify us, and you agree to cooperate, at your expense, with our defense of such claims. We will use reasonable efforts to notify you of any such claim, action, or proceeding which is subject to this indemnification upon becoming aware of it.

So, when I first wrote this article I had a paragraph noting that once it actually launched I expected that Truth Social would speedrun the content moderation learning curve just like every MAGA-wannabe social network before it. But... of course, it's even faster than that. You see, despite saying it wasn't open, a bunch of fairly enterprising people figured out how to access the site and sign up for accounts via an unadvertised link. So someone set up a Donald J. Trump account. And posted quite an image.

The "donaldjtrump" account of Trump's TRUTH Social has already been hacked.
pic.twitter.com/LDQ5w24tcV — Drew Harwell (@drewharwell) October 21, 2021

Someone else created their own Donald Trump account:

Was just able to setup an account using the handle @donaldtrump on 'Truth Social,' former President Donald Trump's new social media website. Although the site is not officially open, a URL was discovered allowing users to sign up anyway. pic.twitter.com/MRMQzjNhma — Mikael Thalen (@MikaelThalen) October 21, 2021

And a Mike Pence account:

Anyone can create an account on Trump's social network TRUTH Social using a publicly available link. I literally just registered "mikepence." The site hasn't even launched yet and it's already this vulnerable pic.twitter.com/v9nPN8ibDS — Drew Harwell (@drewharwell) October 21, 2021

And thus we learned that, indeed, the "freedom loving" Truth Social... is already banning accounts.

A hush falls upon TRUTH Social. pic.twitter.com/HSKMTMlEMU — Drew Harwell (@drewharwell) October 21, 2021

Freedom is so fleeting. I mean, it went away even faster than all those other Trumpist social media sites. Oh, and it seems notable that even pre-launch, Trump's "freedom" loving social media site has text ready to go for suspended accounts:

You can no longer use your account, and your profile and other data are no longer accessible. You can still login to request a backup of your data until the data is fully removed, but we will retain some data to prevent you from evading the suspension.

So... it seems that Truth Social has "permanent suspensions" all ready to go. Just like every other social media website, of course. But at least those other sites don't present themselves as the antithesis of "Big Tech" while doing exactly the same thing.

Either way, this can't be a good week for all those other super thirsty MAGA wannabe social networks. Gab is still out there pretending Trump is posting on its platform (they just repost every one of his press releases on his "verified" account and say they've "reserved" the account for him). Then, of course, there was Parler, which promised not to remove content, but whose former CEO gleefully told reporters that he was suspending "leftist trolls." And, then, of course, there was Gettr, which was taken over and "launched" by former close Trump aide, Jason Miller, and which also speedran the content moderation learning curve. Miller was unable to get his recently former boss onto his own platform, and now they'll be competitors.

And, hey, competition is a good thing. Let a million social networks bloom. But don't expect one that is actually "taking on Big Tech" or guaranteeing that everyone has a voice. Because that's not Truth Social. It's just more lies.

Oh, and there are even some people suggesting that Truth Social is merely a reskin of Mastodon, as it matches almost exactly. If that's the case, then it's almost certainly violating Mastodon's AGPL license. And, considering that Truth Social's terms of service say that it is forbidden for you to:

copy or adapt the Site’s software, including but not limited to Flash, PHP, HTML, JavaScript, or other code.

Then it seems that it certainly would violate such an open source license. And, I'm not even going to ask why they're including "Flash" in that list other than to wonder where they copied this list from...
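On the Mastodon point: one low-tech way observers fingerprint a suspected reskin is to hit the standard /api/v1/instance endpoint that stock Mastodon servers expose and see whether it answers like one. A minimal sketch, assuming the operator hasn't stripped the endpoint out (the example URL is a placeholder, not a claim about any specific site):

```python
import json
import urllib.request

def looks_like_mastodon(base_url):
    """Very rough check: does the site answer Mastodon's standard instance endpoint?"""
    try:
        with urllib.request.urlopen(f"{base_url}/api/v1/instance", timeout=10) as resp:
            data = json.loads(resp.read())
    except Exception:
        return False
    # Stock Mastodon returns fields like "uri" and "version" from this endpoint.
    return {"uri", "version"} <= data.keys()

# Example usage (hypothetical URL -- any suspected reskin could be checked this way):
# print(looks_like_mastodon("https://example.com"))
```

A positive answer isn't proof on its own, of course; in practice people also compare page markup and behavior against a stock install.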

Read More...
posted 4 days ago on techdirt
For several years now, we've been hammering on the idea that content moderation at scale is impossible to get right, otherwise known as Masnick's Impossibility Theorem. The idea there is not that platforms shouldn't do any form of moderation, or that they shouldn't continue to try to improve their methods for moderation. Instead, this is all about expectations setting, partially for a public that simply wants better content to show up on their various devices, but even more so for political leaders that often see a problem happening on the internet and assume that the answer is simply "moar tech!".

Being an internet behemoth, Facebook catches a lot of heat when its moderation practices suck. Several years ago, Mark Zuckerberg announced that Facebook had developed an AI-driven moderation program, alongside the claim that this program would capture "the vast majority" of objectionable content. Anyone who has spent 10 minutes on Facebook in the years since realizes how badly Facebook failed at that goal. And, as it turns out, failed in both directions. By that I mean that, while much of our own commentary on all this has focused on how often Facebook's moderation ends up blocking non-offending content, a recent Ars Technica post on just how much hate speech makes its way onto the platform has some specific notes about how some of the most objectionable content is misclassified by the AI moderation platform.

Facebook’s internal documents reveal just how far its AI moderation tools are from identifying what human moderators were easily catching. Cockfights, for example, were mistakenly flagged by the AI as a car crash. “These are clearly cockfighting videos,” the report said. In another instance, videos livestreamed by perpetrators of mass shootings were labeled by AI tools as paintball games or a trip through a carwash.

It's not entirely clear to me just why the AI system is seeing mass shootings and animals fighting and thinking it's paintball or carwashes, though I unfortunately have some guesses and they aren't fun to think about. Either way, this... you know... sucks! If the AI you're relying on to filter out extreme and violent content labels a mass shooting as a trip through the carwash, well, that really should send us back to the drawing board, shouldn't it?

It's worse in other countries, as the Ars post notes. There are countries where Facebook has no database of racial slurs in native languages, meaning it cannot even begin blocking such content on the site, via AI or otherwise. Polled Facebook users routinely identify hate on the platform as its chief problem, but the company seems to be erring in the opposite direction.

Still, Facebook’s leadership has been more concerned with taking down too many posts, company insiders told WSJ. As a result, they said, engineers are now more likely to train models that avoid false positives, letting more hate speech slip through undetected.

Which may actually be the right thing to do. I'm not prepared to adjudicate that point in this post. But what we can say definitively is that Facebook has an expectations setting problem on its hands. For years it has touted its AI and human moderators as the solution to the most vile content on its platform... and it doesn't work. Not at scale at least. And outside of America and a handful of other Western nations, barely at all.
It might be time for the company to just say so and tell the public and its representatives that it's going to be a long, long while before it gets this anywhere close to right.
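The "avoid false positives" tradeoff described in the WSJ reporting above is, at bottom, a threshold decision. Here's a toy sketch with invented scores (not Facebook's models or numbers): raising the confidence bar at which posts get removed cuts false positives on benign posts, but lets more genuinely hateful posts through.

```python
# Toy scores a classifier might assign (probability a post is hate speech),
# paired with ground-truth labels. All numbers are invented for illustration.
posts = [
    (0.95, True), (0.80, True), (0.65, True), (0.55, True),    # actual hate speech
    (0.70, False), (0.45, False), (0.30, False), (0.10, False)  # benign posts
]

def outcomes(threshold):
    false_pos = sum(1 for score, is_hate in posts if score >= threshold and not is_hate)
    false_neg = sum(1 for score, is_hate in posts if score < threshold and is_hate)
    return false_pos, false_neg

for t in (0.5, 0.75, 0.9):
    fp, fn = outcomes(t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
# Raising the threshold drives false positives toward zero while the amount of
# hate speech that slips through (false negatives) climbs -- the tradeoff at issue.
```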

Read More...
posted 4 days ago on techdirt
Summary: Snapchat debuted to immediate success a decade ago, drawing in millions of users with its playful take on instant messaging that combined photos and short videos with a large selection of filters and "stickers." Stickers are graphics that can be applied to messages, allowing users to punch up their presentations (so to speak). Snapchat’s innovations in the messaging space proved incredibly popular, moving Snapchat from upstart to major player in a few short years. It also created more headaches for moderators as sent messages soared past millions per day to billions.

Continuing its expansion of user options, Snapchat announced its integration with Giphy, a large online repository of GIFs, in February 2018. This gave users access to Giphy's library of images to use as stickers in messages. But the addition of thousands of images to billions of messages quickly resulted in an unforeseen problem. In early March of 2018, Snapchat users reported that a search of the GIPHY image database for the word "crime" surfaced a racist sticker, as reported by Josh Constine for TechCrunch:

“We first reported Instagram was building a GIPHY integration back in January before it launched a week later, with Snapchat adding a similar feature in February. But it wasn’t long before things went wrong. First spotted by a user in the U.K. around March 8th, the GIF included a racial slur.” — Josh Constine, TechCrunch

Both platforms immediately pulled the plug on the integration while they sorted things out with GIPHY.

Company Considerations:

- What measures can be put in place to prevent moderation problems from moving from one platform to another during cross-platform integration?
- What steps should be taken prior to launch to integrate moderation efforts between platforms?
- What can "upline" content providers do to ensure content moving from their platforms to others meets the content standards of the "downline" platforms?

Issue Considerations:

- What procedures aid in facilitating cross-platform moderation?
- Which party should have final say on moderation efforts, the content provider or the content user?

Resolution: Instagram was the first to reinstate its connection with GIPHY, promising to use more moderators to examine incoming content from the image site:

“We’ve been in close contact with GIPHY throughout this process and we’re confident that they have put measures in place to ensure that Instagram users have a good experience,” an Instagram spokesperson told TechCrunch.

GIPHY offered its own apology for the racist image, blaming the slipup on a bug in its filters. Here's what GIPHY's spokesperson told Gizmodo:

After investigation of the incident, this sticker was available due to a bug in our content moderation filters specifically affecting GIF stickers. We have fixed the bug and have re-moderated all of the GIF stickers in our library. The GIPHY staff is also further reviewing every GIF sticker by hand and should be finished shortly.

Snapchat was the last to reinstate its connection to GIPHY, stating it was working directly with the site to revamp both moderation systems to ensure offensive content would be prevented from being uploaded to GIPHY and/or making the leap to connected social media services.

Originally published to the Trust & Safety Foundation website.
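A side note on the technical claim in GIPHY's apology: the company never detailed what its "content moderation filters" actually do. Purely as a hypothetical sketch of the kind of tag-based pre-screening a GIF provider might run before exposing stickers to partner apps (the terms and field names below are invented):

```python
# Hypothetical pre-screen for incoming stickers based on their metadata tags.
# A real pipeline would combine this with image classifiers and human review;
# the GIPHY incident illustrates what happens when such a filter silently fails.
BLOCKED_TERMS = {"slur_example", "gore", "explicit"}   # placeholder terms

def sticker_allowed(tags):
    """Reject a sticker if any of its tags matches the blocklist."""
    return not any(tag.lower() in BLOCKED_TERMS for tag in tags)

incoming = [
    {"id": "g1", "tags": ["cat", "dance"]},
    {"id": "g2", "tags": ["crime", "slur_example"]},
]
approved = [s["id"] for s in incoming if sticker_allowed(s["tags"])]
print(approved)   # ['g1'] -- g2 is held back for review
```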

Read More...
posted 4 days ago on techdirt
The never-ending quest for improved quarterly returns means that things that technically shouldn't be luxury options inevitably wind up being precisely that. We've shown how a baseline expectation of privacy is increasingly treated as a luxury option by hardware makers and telecoms alike. The same thing also sometimes happens to customer service, at least when companies think they can get away with it.

"Smart home" and home security hardware vendor Arlo, for example, has announced a number of new, not particularly impressive subscription tiers for its internet-connected video cameras. The changes effectively involve forcing users to pay more money every month if they ever want to talk to a live customer service representative. From Stacey Higginbotham:

"This week, Arlo launched what I generously think of as its pay-for-customer-service enticement for its smart home camera products. As of Oct. 4, customers without a subscription who’ve had their devices for more than 90 days no longer get phone support. And after one year, they lose access to live chat support."

If you don't pay Arlo more money for actual customer service, you're relegated to cobbling together support solutions from the company's forums, an automated website chat bot, or elsewhere. Given the cost of Arlo products, the decision to make speaking to an actual human being a $3 to $15 monthly add-on is fairly ludicrous:

"Arlo’s customer support framework now requires a $2.99 to $14.99 per month Arlo subscription, a free trial plan, or the device to be within 90 days of purchase for phone support. Then you’re downgraded to chat support for the remainder of the year. After that, absent a plan, Arlo customers with problems will only have access to a virtual assistant or the public forums. That means no phone support and no chat. This feels pretty punitive for a product that can cost between $130 and $300 depending on the device."

Even U.S. telecom giants, the poster children for atrocious U.S. customer service, haven't meaningfully pursued making live customer support a premium option (though they have tinkered with providing worse support to folks with low credit scores). Arlo's choice comes amidst higher shipping costs and supply chain issues during COVID, but the decision to try and recover those higher costs by making basic competency a luxury tier will likely come back to bite it in an IoT space that's only getting more competitive.

Read More...
posted 4 days ago on techdirt
Via Travel & Leisure comes this warning -- one the online magazine has decided to portray as exciting news.

Delta Air Lines is expanding its partnership with the Transportation Security Administration with its use of facial recognition technology making getting through airport security even quicker. The airline is implementing a "digital identity experience" at its hub in Atlanta, offering customers with TSA PreCheck and a Delta SkyMiles number the chance to pass through security and board their flight without having to pull out a boarding pass or their ID.

Ah, the "digital identity experience." That's apparently DeltaSpeak for "biometric collection and facial recognition deployment." That's been the DHS's plan all along. It may have pretended it only wanted to post up at ports of entry (i.e., international airports) but it is going to roll this out to as many airports as possible. How do we know "digital identity experience" is Delta PR? Because this "article" is largely a regurgitation of Delta's own press release about its increased biometric collection/domestic surveillance efforts.

First unveiled in Detroit security checkpoints in early 2021, Delta’s digital identity experience is an industry first in exclusive partnership with TSA PreCheck. The experience is expanding to Atlanta, offering customers a more efficient way to navigate the airport – without showing a paper boarding pass or a physical government ID. With just one look at a camera, customers who qualify and opt in can easily and efficiently check a bag, pass through the TSA PreCheck security line and board their plane.

You'll notice this "experience" is first being offered to PreCheck participants. This means people who have already registered and paid the government to buy back some of their Constitutional rights are the first to be invited to help expand Customs and Border Protection's biometric database. Delta is just the "partner." The TSA just mans the front end. The data and subsequent scans belong to the CBP.

Once a customer reaches a camera at the airport, their image is encrypted and sent to U.S. Customs and Border Protection’s (CBP) facial biometric matching service via a secure channel with no accompanying biographic data. CBP then verifies a customer’s identity against government holdings and sends back an indicator to allow the customer to proceed.

There are several things wrong with all of this, starting with Travel & Leisure's cheery stenography of Delta's up-sell of government intrusion. While no one expects a leisure mag to get serious about the implications of expanding biometric collections and facial recognition programs to encompass US citizens traveling domestically, the article could have used less of Delta's GO TEAM USA jargon and more actual facts: like how facial recognition AI -- all of it -- is fundamentally flawed. Or maybe just point out the unjustified but steady encroachment of surveillance tech into airports serving millions of travelers the government has no reason to suspect are up to no good.

And Travel & Leisure may want to vet its sources. After all, Delta is more than happy to pretend things aren't the way they are as it pitches in to make traveling a worse experience for its customers.

Atlanta’s domestic terminal south security checkpoint is the first in the U.S. that will be converted to computed tomography-automated screening lane (CT-ASL) systems - making the world’s busiest airport even more efficient as travelers connect to destinations around the world.
That's simply not true. Increased efficiency may be the end goal, but a recently released report by the DHS Inspector General says CT scanners are actually slowing down the screening process.

TSA deployed CT systems to airport passenger screening checkpoints that did not meet minimum throughput requirements. TSA’s February 2018 Operational Requirements Document identified the need for a CT system capable of screening, on average, 200 items per hour to successfully perform the mission. However, we determined TSA purchased 300 CT systems capable of screening an average of 170 items per hour — 15 percent less than the minimum requirement, and less than the AT X-ray system capability of approximately 354 items per hour.

This is a human centipede. The TSA says some shit. Delta Air Lines reposts it with its own happy spin. And, at the tail end of it, consumer-facing sites are swallowing everything and dumping it onto web pages without bothering to question the sources or provide any information that might counterbalance the government's assertions about how more intrusive collections make America a better place to call home.

Read More...
posted 4 days ago on techdirt
Less than a week after its horrendous decision to help China's censorship apparatus keep Chinese residents from accessing the accounts of American journalists, LinkedIn has announced it will no longer be offering the full-featured version of its quasi-social media platform in the country. (via the BBC)

Specifically cited in senior vice president Mohak Shroff's announcement are China's escalating censorship demands, albeit in fairly non-specific terms. It also acknowledges Microsoft and LinkedIn made a calculated decision to do business with a government that had the power to shut it down (or run it off) if LinkedIn failed to satisfactorily acquiesce.

Our decision to launch a localized version of LinkedIn in China in February 2014 was driven by our mission to connect the world's professionals to make them more productive and successful. We recognized that operating a localized version of LinkedIn in China would mean adherence to requirements of the Chinese government on Internet platforms. While we strongly support freedom of expression, we took this approach in order to create value for our members in China and around the world. We also established a clear set of guidelines to follow should we ever need to re-evaluate our localized version of LinkedIn in China.

This strategy has enabled us to navigate the operation of our localized version of LinkedIn in China over the past seven years to help our members in China find a job, share and stay informed. While we’ve found success in helping Chinese members find jobs and economic opportunity, we have not found that same level of success in the more social aspects of sharing and staying informed. We’re also facing a significantly more challenging operating environment and greater compliance requirements in China. Given this, we’ve made the decision to sunset the current localized version of LinkedIn, which is how people in China access LinkedIn’s global social media platform, later this year.

This makes LinkedIn the last US-based social media service to exit the Chinese market. It was preceded by all the rest, which have either been blocked or voluntarily parted ways with the country.

And this announcement doesn't mean LinkedIn is completely done with China. The post says it will offer a stripped-down version called "InJobs," which will only contain users' contact information and job histories. It's just LinkedIn with all the "social" stuff removed, like the ability to share posts or articles. That should reduce the number of censorship demands to near zero. But it's China, so demands will continue to be made. This slimmed-down version of LinkedIn still gives the government the option of vanishing accounts of people it doesn't like, making it more difficult for them to connect with potential employees and employers.

On the whole, it's a good decision by LinkedIn. The past few years have seen demands ramp up. And they've also seen LinkedIn's compliance rise to meet them. It's not a good look for a US tech company, no matter how enticing a market of a billion potential users is.

Read More...
posted 4 days ago on techdirt
The Python, Git, And YAML Bundle has 9 courses to help you learn all about Python, YAML, and Git. Five courses cover Python programming from the beginner level to more advanced concepts. Three courses cover Git and how to use it for your personal projects. The final course introduces you to YAML fundamentals. The bundle is on sale for $29. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Read More...
posted 4 days ago on techdirt
For years telecom executives, jealous of internet services and ad revenue, have demanded that content and services companies pay them an extra toll for no reason. You saw this most pointedly during the net neutrality fracas, when AT&T routinely insisted Google should pay it additional money for no coherent reason. Telecom execs have also repeatedly claimed that Netflix should pay them more money just because. Basically, telecoms have tried to use their gatekeeper and political power to offload network investment costs to somebody else, and have spent literally the last twenty years using a range of incoherent arguments to try and justify it with varying degrees of success.

While these efforts quieted down for a few years, they've popped back up recently thanks to, of all things, Netflix's Squid Game. In South Korea, ISPs have demanded that Netflix pay them more money because of the streaming demand the popular show places on their networks. As we noted then, this makes no coherent sense, given ISPs build their networks to handle peak capacity load; what specific type of traffic causes that load doesn't particularly matter. It's just not how network engineering or common sense work.

That's not stopping telecom executives around the world, of course. Across the pond, Marc Allera, chief executive of BT's consumer division, has trotted out the same argument, claiming that a surge in usage (during a pandemic, imagine that) is somehow Netflix's problem:

"Every Tbps (terabit-per-second) of data consumed over and above current levels costs about £50m,” says Marc Allera, the chief executive of BT’s consumer division. “In the last year alone we’ve seen 4Tbps of extra usage and the cost to keep up with that growth is huge.” An overwhelming majority of day-to-day usage, up to 80%, is accounted for by only a handful of companies such as YouTube, Facebook, Netflix and the games company Activision Blizzard."

But again, that's not how any of this works. ISPs build out network infrastructure based on managing peak demand. It doesn't matter whether that demand originates from Squid Game or video gaming. As an ISP it's your responsibility to meet consumer and enterprise demand, since that's what they already pay you an arm and a leg for. Consumers and businesses alike already pay ISPs for bandwidth and transit, often accompanied by a steady stream of price hikes. ISPs are effectively asking for yet another troll toll, you know, just because.

Whether talking about Netflix or Google, one core component of this telecom executive argument is always that tech companies are "getting a free ride":

"A lot of the principles of net neutrality are incredibly valuable, we are not trying to stop or marginalise players but there has to be more effective coordination of demand than there is today,” he says. “When the rules were created 25 years ago I don’t think anyone would have envisioned four or five companies would be driving 80% of the traffic on the world’s internet. They aren’t making a contribution to the services they are being carried on; that doesn’t feel right."

But nobody gets a "free ride" in telecom. Consumers and companies alike pay increasingly more money for bandwidth. And in the case of companies like Google and Netflix, they pay billions of dollars for expedited transit, undersea cable routes, CDNs (which Netflix provides ISPs for free), and even (in Google's case) their own residential ISPs.
Netflix also has a long history of providing users with tools to limit streaming so it doesn't run afoul of user broadband caps. Suggesting they somehow get a free ride and should pay another troll toll just because makes absolutely no sense. It's a dusty old talking point that originated with AT&T nearly twenty years ago and kicked off the net neutrality debates. Its origins are simple greed. Telecom execs are trying to offload the costs of network investment (their job) onto somebody else to make investors happy. This somehow gets dressed up into something far more elaborate than it actually is.
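To see why "who generates the traffic" doesn't change the engineering, here's a toy capacity-planning sketch with entirely invented numbers: an ISP provisions for the aggregate peak hour (plus some headroom), and that figure is the same regardless of whether Netflix, gaming, or anything else happens to dominate the mix at that hour.

```python
# Invented hourly traffic (in Tbps) by source on a hypothetical network.
hourly_tbps = {
    "netflix":   [1.0, 1.2, 2.0, 2.6],
    "gaming":    [0.5, 0.9, 1.1, 0.8],
    "web/other": [0.8, 0.7, 0.9, 1.0],
}

HEADROOM = 1.3   # provision above observed peak; the exact factor here is illustrative

def required_capacity(traffic):
    hours = len(next(iter(traffic.values())))
    totals = [sum(source[h] for source in traffic.values()) for h in range(hours)]
    return max(totals) * HEADROOM    # driven by the busiest hour's total

print(round(required_capacity(hourly_tbps), 2))
# Relabeling Netflix bytes as gaming bytes (or anything else) wouldn't change
# this number -- which is the point about peak-load provisioning made above.
```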

Read More...