posted about 4 hours ago on techdirt
Did ancient Egyptian parents worry their kids might get addicted to this game, called senet? Keith Schengili-Roberts/Wikimedia Commons, CC BY-SA

Video games are often blamed for unemployment, violence in society and addiction – including by partisan politicians raising moral concerns. Blaming video games for social or moral decline might feel like something new. But fears about the effects of recreational games on society as a whole are centuries old. History shows a cycle of apprehension and acceptance about games that is much like the events of modern times.

From ancient Egyptian hieroglyphs, historians know that the oldest examples of board games trace back to the game of senet around 3100 B.C. One of the earliest known written descriptions of games dates from the fifth century B.C. The Dialogues of the Buddha purport to record the actual words of the Buddha himself. In them, he is reported to say that “some recluses… while living on food provided by the faithful, continue addicted to games and recreations; that is to say… games on boards with eight or with 10 rows of squares.”

That reference is widely recognized as describing a predecessor to chess – a much-studied game with an abundant literature in cognitive science and psychology. In fact, chess has been called an art form and was even used as a peaceful U.S.-Soviet competition during the Cold War. Despite the Buddha’s concern, chess has not historically raised worries about addiction. Scholars’ attention to chess is focused on mastery and the wonders of the mind, not the potential of being addicted to playing.

Somewhere between the early Buddhist times and today, worries about game addiction have given way to scientific understanding of the cognitive, social and emotional benefits of play – rather than its detriments – and even to viewing chess and other games as teaching tools for improving players’ thinking, social-emotional development and math skills.
A die among other playing pieces from the Akkadian Empire, 2350-2150 B.C., found at Khafajah in modern-day Iraq. CC BY-SA

Games and politics

Dice, an ancient invention developed in many early cultures, found their way to ancient Greek and Roman culture. It helped that both societies had believers in numerology, an almost religious link between the divine and numbers. So common were games of dice in Roman culture that Roman emperors wrote about their exploits in dice games such as Alea. These gambling games were ultimately outlawed during the rise of Christianity in Roman civilization, because they allegedly promoted immoral tendencies.

More often than not, concerns about games were used as a political tool to manipulate public sentiment. As one legal historian puts it, statutes on dice games in ancient Rome were only “sporadically and selectively enforced … what we would call ‘sports betting’ was exempted.” The rolling of dice was prohibited because it was gambling, but wagering on the outcomes of sport was not. Until, of course, sports themselves came under fire.

The history of the “Book of Sports,” a 17th-century compendium of declarations of King James I of England, demonstrates the next phase of fears about games. The royal directives outlined what sports and leisure activities were appropriate to engage in after Sunday religious services. In the early 1600s, the book became the subject of a religious tug of war between Catholic and Puritan ideals. Puritans complained that the Church of England needed to be purged of more influences from Roman Catholicism – and liked neither the idea of play on Sundays nor how much people liked doing it. In the end, English Puritans had the book burned.

As a Time magazine article put it, “Sport grew up through Puritanism like flowers in a macadam prison yard.” Sports, like board games before them, were stifled and the subject of much ire, then and now.
Retro Report explains the pinball-machine bans of the mid-20th century.

Pinball in the 20th century

In the middle part of the 20th century, one particular type of game emerged as a frequent target of politicians’ concern – and playing it was even outlawed in cities across the country. That game was pinball. The parallels with today’s concerns about video games are clear.

In her history of moral panics about elements of popular culture, historian Karen Sternheimer observed that the invention of the coin-operated pinball game coincided with “a time when young people – and unemployed adults – had a growing amount of leisure time on their hands.” As a result, she wrote, “it didn’t take long for pinball to show up on moral crusaders’ radar; just five years spanned between the invention of the first coin-operated machines in 1931 to their ban in Washington, D.C., in 1936.”

New York Mayor Fiorello LaGuardia argued that pinball machines were “from the devil” and brought moral corruption to young people. He famously used a sledgehammer to destroy pinball machines confiscated during the city’s ban, which lasted from 1942 to 1976.

An early pinball machine, before the innovation of flippers to keep the ball in play longer. Huhu/Wikimedia Commons

His complaints sound very similar to modern-day concerns that video games contribute to unemployment at a time when millennials are one of the most underemployed generations. Even the cost of penny arcade pinball machines raised political alarms about wasting children’s money, in much the way that politicians now declare they have problems with small purchases and electronic treasure boxes in video games.
As far back as the Buddha’s own teachings, moral leaders were warning about addictive games and recreations, including “throwing dice,” “Games with balls” and even “turning somersaults,” recommending the pious hold themselves “aloof from such games and recreations.” Then, as now, play was caught up in society-wide discussions that really had nothing to do with gaming – and everything to do with keeping or creating an established moral order.

Lindsay Grace, Knight Chair of Interactive Media; Associate Professor of Communication, University of Miami. This article is republished from The Conversation under a Creative Commons license. Read the original article.

Permalink | Comments | Email This Story

Read More...
posted about 8 hours ago on techdirt
California has become the first state in the US to ban facial recognition tech use by local cops. Matt Cagle has more details on the ACLU-backed law.

Building on San Francisco's first-of-its-kind ban on government face recognition, California this week enacted a landmark law that blocks police from using body cameras for spying on the public. The state-wide law keeps thousands of body cameras used by police officers from being transformed into roving surveillance devices that track our faces, voices, and even the unique way we walk. Importantly, the law ensures that body cameras, which were promised to communities as a tool for officer accountability, cannot be twisted into surveillance systems to be used against communities.

As Cagle points out, San Francisco was the first city in the nation to ban use of facial recognition by city agencies. Oakland followed closely behind. And all the way on the other side of the country, Somerville, Massachusetts became the second city in the US to enact a facial recognition ban. This statewide ban will hopefully lead to others around the nation.

The tech that multiple companies are pushing government agencies to adopt is unproven, at best. The rate of false positives in live deployments is alarming. Just as alarming is the flipside: false negatives that allow the people law enforcement agents are actually looking for to slip away. Despite this, everyone from the DHS to local police departments thinks this is the next wave of acceptable surveillance -- one that allows government agencies to, in essence, demand ID from everyone who passes by their cameras.

The resistance to facial recognition's seemingly unchecked expansion is finally having some effect. Axon (formerly Taser) has temporarily halted its plans to introduce facial recognition tech into its body cameras, and Google is stepping away from its development of this tech for government agencies.
Unfortunately, Amazon has shown no desire to step away from the surveillance state precipice. It is continuing to sell its own brand of facial recognition to law enforcement agencies, as well as co-opting citizens' doorways into its surveillance network with its Ring doorbell/cameras.

Still, it's a solid win for residents of the state. The ban blocks the use of facial recognition tech by state law enforcement until the end of 2022. It also blocks the use of other biometric surveillance tech and prevents law enforcement from using existing biometric data to feed any predictive policing tools agencies might be using or planning on implementing.

With more states and cities willing to at least undertake serious discussions of the implications of facial recognition tech, it's unlikely California will remain the odd state out in the biometric surveillance race.

posted about 10 hours ago on techdirt
We've covered a lot of data breaches on this site over the years. Most involve the leakage of personal info via unsecured databases or careless data handling. But I doubt we've covered anything as bizarre as this. (via Databreaches.net)

A Devon hospital has apologised after a caller’s voicemail, containing personal patient details, became the hospital’s answerphone message for more than seven hours. During that time the caller was inundated with calls from patients giving details about their health problems believing they were ringing North Devon District Hospital in Barnstaple.

Through the magic/convolutions of business phone systems, the message a woman left while calling to make an appointment for her husband somehow became the message greeting callers who were unable to reach a live human being. Adding inconvenience to possibly tortious injury, the hospital also managed to route a number of inbound calls to the person whose message it had accidentally co-opted, resulting in the person (who had yet to discover her personal information had been compromised) fielding phone calls from other patients, who ended up sharing their personal info with a complete stranger.

The woman, who asked not to be named, said: “I didn’t think any more of it until an hour and a half later an elderly man called our home phone talking about his private parts as he had a problem and had to have an operation.

“I said to him, ‘I’m ever so sorry but I don’t know what you’re talking about?’. He replied, ‘they have given me your number’.

The hospital's explanation for this incident isn't very reassuring. It places the blame on outdated equipment. Unfortunately for people who don't want their personal info handed over to complete strangers, there's no telling how many public and private entities could make the same claim about their phone systems.
She said: “The phone lines were redirected and I was told it was completely human error because some parts of the hospital are still using old answer machines."

And yet old answering machines are operated all the time without turning a message someone left into a voicemail greeting. Sure, it's not impossible. But good god is it ever unlikely.

Stupidity before malice, as the saying goes. There's no conceivable reason the hospital would want to generate this kind of press, so it would be irrational to think someone did this to deliberately harm this person. But harm was done nonetheless, and the combination of the UK's Data Protection Act and the GDPR could result in a pretty hefty fine for the hospital. The going rate is "4% of turnover [gross revenue]" -- something that has seen maximum fines rise from £500,000 (the amount charged to Equifax) to £183 million (levied against British Airways).

Since the Devon hospital is unlikely to replace its hardware immediately, the risk of repetition remains. Considering it's apparently never happened before, the risk is low -- but certainly not nonexistent. Adding humans to outdated tech will sometimes result in errors that aren't easily replicated. Given that we've heard nothing comparable to this in the many years this blog has been running, this hospital's inadvertent use of a patient's sensitive message as its own answering machine greeting is likely to remain a data breach unicorn.

posted about 12 hours ago on techdirt
Blizzard has found itself trying to navigate its self-made storm over the past several weeks. It started when a professional Hearthstone player relayed a message of support for the ongoing protests in Hong Kong, leading Blizzard to issue a one-year ban and pull back prize money for that player. With many eSport and IRL-sport leagues either being directly confronted by the regime in Beijing, or simply self-censoring in fear of such a confrontation, the whole ecosystem of eGaming has felt the effects of Blizzard's actions. And, while Blizzard eventually did lighten the punishment it had initially doled out, the company also thumbed its nose at the principal complaint in the protests: that Blizzard was kneeling at an altar constructed of the Chinese government's thin skin.

And now the company is simply doubling down. Earlier this month, American students at American University held up a sign during a competition stream that read, "Free Hong Kong, Boycott Blizz." True to its earlier lack of spine, Blizzard has responded by issuing the team a six-month ban from competitions.

American University Hearthstone players who recently held up a sign calling for Hong Kong’s freedom during a livestream have been officially disciplined by Activision-Blizzard. In a Twitter post today, team member Casey Chambers stated that the team has been banned from competitive play for six months. When a punishment from Blizzard similar to Blitzchung’s was not forthcoming, the team voluntarily dropped out of future tournaments. Now, they’ve been officially banned for half a year.

Interestingly, the American University team appeared to be trying to make a very specific point by getting banned. The team clearly saw inequity in the punishment for Blitzchung being both swift and severe, while their actions went unpunished at first. To that end, the team voluntarily dropped out of competition, apparently as part of its call to protest Blizzard generally.
When the punishment eventually did come down, team member Casey Chambers tweeted that he was pleased it did.

Happy to announce the AU Hearthstone team received a six month ban from competition. While delayed I appreciate all players being treated equally and no one being above the rules. pic.twitter.com/mZStoF0e0t
— Casey Chambers (@Xcelsior_hs) October 16, 2019

He later responded to someone claiming that Blizzard was violating its own call for "every voice to matter" with the ban by stating, "Nah bro. We knew what we were doing."

All of which is entirely beside the point. When Hearthstone competitors have reached the point of trying to get themselves banned to make a point, never mind actively calling for a boycott of Blizzard, it signals that the company is losing the PR war in America. What Blizzard now has to decide is what the math is on the value of pissing off the American public versus keeping Beijing happy. Based on this most recent six-month ban, it looks like the company thinks it can thread a needle that I'm not sure actually exists.

posted about 13 hours ago on techdirt
In the name of securing the homeland, Congressional reps are tossing around the idea of regulating online speech. This isn't the first effort of its type. There's always someone on Capitol Hill who believes the nation would be safer if the First Amendment didn't cover quite so much speech. But this latest effort is coming directly from the Congressional committee that oversees homeland security efforts, as The Hill reports.

Civil liberties and technology groups have been sharply critical of a draft bill from House Homeland Security Committee Democrats on dealing with online extremism, saying it would violate First Amendment rights and could result in the surveillance of vulnerable communities.

The whole thing sounds a bit innocuous. At first. The bill would create a bipartisan commission to develop recommendations for Congress to address online extremism. The commission would have to balance these recommendations with existing speech protections. But it's easy to see how certain inalienable rights will become more alienable if this commission decides national security interests are more important than the rights of the people it's securing.

When you get into the details, you begin to see how this isn't really about making Congress do more to address the problem. It's about regulating online speech via Congressional action. The end result will be censorship. And self-censorship in response to the chilling effect.

The government-appointed body would be given the power to subpoena communications, a sticking point that raised red flags for First Amendment advocates concerned about government surveillance. A source familiar with the legislation told The Hill they were immediately concerned that the subpoena power could be abused, questioning whether it would unintentionally create another avenue for the government to obtain private conversations on social media between Americans.
The draft bill would require companies to "make reasonable efforts" to remove any personally identifiable information from any communications they handed over. But that provision has not satisfied tech and privacy groups. This isn't about moderating public posts on social media platforms. It will likely end up affecting those eventually, but the draft bill appears to allow the commission to target personal communications, which are usually private. Whether or not there are robust protections in place to strip identifying info doesn't really matter. A Congressional body with the power to subpoena the communications of people not actually under investigation isn't the sort of thing anyone should be encouraging, no matter the rationale.

Social media platforms have been doing more to address concerns of online radicalization, but their efforts never seem to satisfy political leaders. The efforts have routinely resulted in collateral damage, not the least of which is the removal of evidence of criminal activity from the internet. Moderation at scale is impossible. The imperfections of algorithms, combined with the human flaws of the thousands of moderators employed by social media platforms, have turned online moderation into a mess that satisfies no one and does harm to free speech protections.

Any Congressional rep with the ability to perform a perfunctory social media search can find something to wave around in hearings about online radicalization and internet companies' unwillingness to clean up the web. It doesn't mean they're right. It just shows it's impossible to satisfy everyone. In this case, the Congressional committee appears to be targeting white nationalist extremists. Just because the target has shifted to homegrown threats doesn't make the proposal any less dangerous.
Even if it never results in the subpoenaed harvesting of communications, it could still encourage the federal government (and the local agencies that work with it) to expand existing social media monitoring programs. These also utilize imperfect AI and flawed humans. And they will also result in the over-policing of content. Unfortunately, these efforts will utilize actual police, so it's not just the First Amendment being threatened.

posted about 13 hours ago on techdirt
The Complete Microsoft Azure Certification Prep Bundle 2019 has four courses to help you learn all about Microsoft Azure and prepare for various certification exams. You will discover how to implement Microsoft Azure infrastructure solutions, how to integrate and secure Azure infrastructure, how to design solutions for the Microsoft Azure platform, and more. You'll also learn all of the requirements for the AZ-100, 70-533, AZ-101, AZ-103, AZ-203, AZ-300, and 70-535 certification exams. It's on sale for $29.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

posted about 14 hours ago on techdirt
We live in such fascinating times. We've had some posts concerning people getting (rightly) angry about Blizzard banning a top player who supported the protests in Hong Kong. In order to make the company feel more heat, some pissed-off players have apparently been plotting to weaponize the GDPR and flood the company with data requests. This started with a Reddit post directly telling users that if they're upset about Blizzard's decisions regarding Hong Kong, to hit back with a GDPR request:

I know a lot of people, myself included, are upset by Blizzard/Activisions spineless decision to ban Blitxchung. After personally uninstalling all of my Blizzard games, I thought, "what else can I do?". The answer, is GDPR requests. Let me explain. Under EU law, you're allowed to request all information a company has on you, along with the purpose of this information collection. What most people don't know, is that these requests are VERY hard to comply with, and can often take a companies legal group 2-7 days to complete PER REQUEST. If a company doesn't get you the information back in 30 days, they face fines and additional issues. In extreme cases, a company can request an additional 2 months to complete the requests if there is a large volume, but suffice to say, if a company gets a significant amount of requests, it can be incredibly expensive to deal with, as inevitably they will have to hire outside firms/lawyers to help out. So, if you want to submit a GDPR request, and live in the EU, you can use the following form letter....

I've actually been in the middle of investigating a different story about a possible weaponizing of the GDPR, but the details there have been a bit murkier, so it's fascinating to see things laid out so clearly here. To be clear, there is some cleverness here: such requests really are a pain in the ass to comply with, and can be costly and resource-intensive.
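The response-window math the Reddit post describes is simple enough to sketch. A toy calculator, using the 30-day window the post cites (GDPR Article 12 itself says "one month," extendable by two further months for complex or numerous requests, modeled here as 60 extra days):

```python
from datetime import date, timedelta

def gdpr_deadline(received, extended=False):
    """Rough response deadline for a data access request, per the
    post's description: 30 days, plus ~2 more months if the company
    invokes an extension for a large volume of requests."""
    deadline = received + timedelta(days=30)
    if extended:
        deadline += timedelta(days=60)
    return deadline

# A request received mid-October 2019:
print(gdpr_deadline(date(2019, 10, 16)))                 # 2019-11-15
print(gdpr_deadline(date(2019, 10, 16), extended=True))  # 2020-01-14
```

Even with the extension, the clock keeps running per request, which is why a flood of simultaneous requests is so expensive to handle.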
And while it may be fun and cathartic to use that power against a company like Blizzard as a way to punish it for its ridiculous stance, be clear that these kinds of weaponized GDPR requests are likely to be used against many others as well, including companies you might actually like. This is yet one more reason why, even if you support the overall goals of the GDPR, you should be very, very concerned with how the law is actually implemented.

posted about 18 hours ago on techdirt
For years we've talked about how the broadband and cable industry has perfected the use of utterly bogus fees to jack up subscriber bills, a dash of financial creativity it adopted from the banking and airline industries. Countless cable and broadband companies tack on a myriad of completely bogus fees below the line, letting them advertise one rate -- then sock you with a higher rate once your bill actually arrives. Despite this being false advertising, regulators have chosen to look the other way for decades. Last week, a new study highlighted how nearly 25 percent of your cable bill is made up of bullshit fees, netting $28 billion annually from such surcharges.

This week, AT&T is under fire for a new wrinkle on an old game. The company has started raising its customers' broadband prices by as much as seven percent to help offset the company's property taxes. In this case, customers who thought they were signing up for fiber broadband at a fixed, locked rate were suddenly informed they needed to pay 7% more to help pay off AT&T's tax burden:

Effective October 1, 2019, there will be an increase in the AT&T Cost Assessment Charge used to recover AT&T property taxes. The monthly rate will change from 2.92% to 7.00% of your total AT&T Business Internet, Phone and/or U-verse TV monthly charges. This charge is not a tax or fee that the government requires AT&T to collect from its customers.

Again, there are several problems here. One, advertising one rate and then charging something else is false advertising. Two, AT&T's property taxes are a cost of doing business, and should be included above the line. Three, these users were locked in at a "fixed, guaranteed rate," then AT&T simply ignored that promise. AT&T's practice of adding its property taxes to customer bills appears to have begun sometime in 2017.
But there's no indication that the rates being charged actually, realistically reflect AT&T's property tax burden:

AT&T has been charging the property-tax fee to business customers since at least mid-2017. An AT&T business DSL customer in Oklahoma complained about it on Reddit at the time, saying the then-new fee was 1.08% of the monthly bill. In January 2019, an AT&T customer complained in a DSLReports forum that the property-tax fee was raised from 2% to 6.69%. "So I gotta ask—did their 'property taxes' increase by 335%?" the customer wrote, noting the greater-than-three-fold increase.

In a functional market, either competition would kick in to punish companies for this kind of behavior in the form of subscriber exodus, or a regulator would step in to, at the very least, warn the company away from such misleading, predatory behavior. But this being the United States, where the FCC just effectively neutered itself at lobbyists' behest based on entirely manufactured justifications, and vibrant competition remains a pipe dream, we get neither option. Enjoy.
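The forum commenter's arithmetic holds up, for what it's worth: a surcharge moving from 2% to 6.69% of the bill is a 3.345x jump. A quick sketch of the fee math (the $100 monthly bill is a hypothetical figure for illustration):

```python
def fee_change(old_pct, new_pct, monthly_bill):
    """Compare an old and new percentage-of-bill surcharge: returns
    the growth ratio plus the extra dollars per month and per year."""
    old_fee = monthly_bill * old_pct / 100
    new_fee = monthly_bill * new_pct / 100
    ratio = new_pct / old_pct
    extra = new_fee - old_fee
    return ratio, extra, extra * 12

# The 2% -> 6.69% jump reported on DSLReports, on a hypothetical $100 bill:
ratio, per_month, per_year = fee_change(2.0, 6.69, 100.0)
print(f"fee grew {ratio:.3f}x")         # the "greater-than-three-fold" increase
print(f"extra ${per_month:.2f}/month")
print(f"extra ${per_year:.2f}/year")
```

Small per-month amounts, but multiplied across millions of subscribers they are exactly how below-the-line fees quietly raise the advertised rate.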

posted about 21 hours ago on techdirt
At long last, it appears the UK government's porn blockade has been sunk. The government missed another deployment window in April of this year. Karl Bode reported the government was considering saying the hell with it all a couple of months later. But even that report suggested the UK might still try to make its stupid porn blocking plan work. It claimed the government just needed an indefinite amount of time to bring its porn filter into compliance with EU law -- something it had years to do but apparently only took into consideration at the last minute.

The porn filtering system was to be deployed by ISPs and porn sites. Age verification would be needed to access porn from paid sites. This information would be stored by third parties for government perusal, generating a tempting honeypot of personal information tied to sexual peccadilloes that could be exploited by anyone who had access to it. You know, in addition to anyone in the government who had access to it… like criminals.

In its partially-instituted form, the filtering system was alarmingly easy to circumvent. When it did work (by which I mean, when it was turned on), it didn't, resulting in over-blocking when it wasn't being beaten by a single Chrome extension. Almost completely useless. And all in a package that required UK citizens to queue up at the online porn box office and affirmatively state their desire to access pornographic content.

After a half-decade of not happening, the UK government has officially ditched its porn filtering program, as Rory Cellan-Jones reports for the BBC.

The government has dropped a plan to use strict age verification checks to stop under-18s viewing porn online. It said the policy, which was initially set to launch in April 2018, would "not be commencing" after repeated delays, and fears it would not work.

This was the plan to force porn providers to deploy government-approved age verification processes.
Those that did not would be blocked by ISPs, which apparently would be providing this vetting service to the government free of charge. Ignored in all of this were sites that did not sell access to porn, like Twitter, Reddit, and other sites where adult content is accessed freely. So, the children the government was so worried about would still have had plenty of options even if this plan had worked.

The Porn Blockade is dead! Long live the Porn Blockade? Digital Secretary Nicky Morgan said other measures would be deployed to achieve the same objectives. Hope springs eternal in the halls of the UK government, where impossibility can be legislated into possibility, kicked around for 48-60 months, and abandoned only when the people whose careers depend on misunderstanding the problems are finally forced to confront reality.

Having mishandled everything about its end of the deal, the UK government is now leaving it up to porn sites to keep kids out. It appears to be voluntary, but the kind of "voluntary" where the person asking expects you to do it and will find some way to punish you if you don't.

In a written statement issued on Wednesday, Ms Morgan said the government would not be "commencing Part 3 of the Digital Economy Act 2017 concerning age verification for online pornography". Instead, she said, porn providers would be expected to meet a new "duty of care" to improve online safety. This will be policed by a new online regulator "with strong enforcement powers to deal with non-compliance".

The only entities truly upset by this turn of events are those that expected to tap into a new government-created revenue stream. OCL, one of the firms hoping to be the vendor of choice for age verification tools, expressed its "shock" that the UK government would abandon its plan to protect children from porn and, presumably, enrich OCL in the process.
But nearly everyone else saw this scrapping as inevitable, considering the oh-so-many unworkable aspects of the filtering program. I'm sure UK citizens are thrilled the government spent nearly five years allowing its Porn Blockade to drift into the rocky shoals of reality. Since other people's money funded the losing battle, the government spared no expense and will presumably continue this spending until it has abandoned another two or three attempts over the next decade or so.

posted 1 day ago on techdirt
Adobe has long had a history of questionable behavior when it comes to the rights of its customers and how the public is informed on all things Adobe. With its constant hammering on the concept that the software it sells is licensed rather than purchased, not to mention its move to more SaaS and cloud-based software, the company is, frankly, one of the pack leaders in consumers not actually owning what they bought.

But what's happening in Venezuela is something completely different. Adobe will be disabling its services entirely in that country, announcing that it is giving customers there roughly a month to download any content stored in the cloud. After that, poof: no more official Adobe access in Venezuela. That includes access to SaaS services that were prepaid. For such prepaid services, Adobe has also announced that zero refunds will be provided.

Why is this happening? According to Adobe, it's to comply with Trump's Executive Order 13884. In the document, Adobe explains:

“The U.S. Government issued Executive Order 13884, the practical effect of which is to prohibit almost all transactions and services between U.S. companies, entities, and individuals in Venezuela. To remain compliant with this order, Adobe is deactivating all accounts in Venezuela.” To make matters worse, customers won’t be able to receive refunds for any purchases or outstanding subscriptions, as Adobe says that the executive order calls for “the cessation of all activity with the entities including no sales, service, support, refunds, credits, etc.”

As the Verge post points out, if you're shrugging at the idea that the average Venezuelan citizen just got bilked out of money or software for which they paid, private citizens aren't the only ones who will be affected by this. NGOs and news outfits will likewise be impacted by the move, and those are some of the organizations attempting to effect change in Venezuela.
If nothing else, this should highlight just how risky engaging in SaaS-style tech service has become. It's one thing to pay your money and not actually own what you've bought. It's quite another to pay that money, not own what you bought, and not get your money back when you don't even get that thing you don't own at all -- because of international politics. Permalink | Comments | Email This Story

Read More...
posted 1 day ago on techdirt
Just a couple months back we wrote about YouTube suing a guy for trying to extort YouTubers with bogus DMCA notices. The evidence was pretty damning that Christopher Brady had been harassing and demanding money from various YouTubers, using the threat of bogus DMCA notices (which could kill someone's account) for leverage. The complaint also suggested that Brady was looking to swat some YouTubers as well. As we noted in our original post, the case hinged on Section 512(f) of the DMCA, which was supposed to be the tool to prevent false takedown notices -- but which in practice is effectively a dead letter, as 512(f) claims rarely go anywhere. If there was some hope that a case with facts this blatant might breathe new life into 512(f), well, that ended quickly, as Brady has wasted no time at all in agreeing to settle the case. The settlement is pretty straightforward. Brady agrees not to send any more bogus DMCA notices to YouTube and also agrees not to "misrepresent or mask" his identity on any Google property. He also agreed to pay $25,000 to Google, which probably about covers their legal bills for bringing this case. Brady also released an apology statement, which suggests he may have sent more bogus DMCA notices than were included in the lawsuit. “I, Christopher L. Brady, admit that I sent dozens of notices to YouTube falsely claiming that material uploaded by YouTube users infringed my copyrights. I apologize to the YouTube users that I directly impacted by my actions, to the YouTube community, and to YouTube itself.” Of course, while it's good to see such an apology and settlement, it still doesn't change the fact that bogus DMCA notices happen all the time. While Brady may have been more extreme and more blatant than most, there's still a huge problem with a law that creates a situation in which mere accusation will often get content removed. Permalink | Comments | Email This Story

Read More...
posted 1 day ago on techdirt
Order your copy of Working Futures today » Over the last few weeks we've been writing about all of the various aspects of the stories in our Working Futures anthology of 14 science fiction and speculative fiction stories, all relating to the "future of work." We've been getting great feedback on the book so far and are excited by how many people have been reading it. If you haven't yet, please check it out as well -- and support Techdirt in the process. Here are all the posts summarizing the stories in the book: Welcome to Working Futures; The future of work is likely to be complicated; The future of work will have unexpected consequences; The future of work will blur the line between humans and machines; The future of work may be beautiful. Since we've now written about all the various stories in the book, for today's post, I wanted to talk a bit about the custom deck of cards we developed and used as part of the process of developing these stories. As I've mentioned in some of the previous posts, much of the impetus behind the Working Futures project was to try to think through the actual implications of technology on jobs and labor in the future. There has been a lot of fretting and a lot of hand-waving, but little exploration of what might actually happen. That's not a surprise, because predicting the future is mostly impossible, especially when it comes to complex systems. However, one tool that has been really useful not for "predicting" the future, but for exploring multiple possible futures, is scenario planning, which is frequently described as a structured way for groups to think about possible futures. It's not about predicting which future will happen, but about exploring various trends, driving forces and the like to determine a few different possible futures -- and the implications of each. We wanted to use this process as a starting point for the Working Futures project, but with a bit of a twist. 
We first polled people online via Techdirt for what they thought were the key driving forces that would impact the future of work -- rating both how much of an impact they thought each force would have and how likely it was that the force would actually matter. From that, we developed a custom deck of cards showing different aspects of each key driving force on each side. So, for example, one side might show what a world would look like if genetic engineering becomes a much bigger deal, gets cheaper, and is used much more widely -- while the flip side of that same card says that genetic engineering has more or less stalled out, remains limited to labs, and bigger breakthroughs stay elusive. We used that deck with a group of about 50 people from a variety of different perspectives for an all-day session in San Francisco. Attending were journalists, technologists, labor activists, human rights activists, entrepreneurs, philanthropists, academics, lawyers, investors, writers, economists, and more. We had them go through a series of exercises to develop a set of 10 different scenarios, which we then gave to the writers who contributed stories to this collection. A key feature of the scenarios was coming up with four or five "media headlines" that would appear in that world. Traditional scenario planning folks will likely balk at the idea of producing ten scenarios, as it's typical to develop just three or four. However, in this case, we thought it made sense, as the goal here was to give authors a variety of different starting points (and I think, for that purpose, the system worked quite well). Either way, in showing the deck of cards to people after the event, we kept hearing over and over again that it might be useful for other scenario planning and strategic planning efforts as well, and people started asking us if they could buy a copy. We've now offered up the Working Futures cards via GameCrafter as a print-on-demand option for $19.99. 
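To make the two-sided-card mechanic concrete, here's a minimal Python sketch of how drawing a few driving-force cards and flipping each to a random face can generate one scenario. The card names and face descriptions below are invented for illustration -- they are not the actual Working Futures deck, just an assumption about its general shape.

```python
import random

# Hypothetical driving forces; each card has two opposing faces.
# These names are illustrative, not the real Working Futures cards.
CARDS = {
    "genetic engineering": ("cheap and widespread", "stalled in the lab"),
    "automation": ("displaces most routine work", "mostly augments workers"),
    "remote work": ("the default for most jobs", "a niche perk"),
    "gig platforms": ("dominate the labor market", "heavily regulated away"),
}

def draw_scenario(n_cards=3, seed=None):
    """Draw n_cards distinct forces and flip each to a random face."""
    rng = random.Random(seed)
    forces = rng.sample(sorted(CARDS), n_cards)
    return {force: rng.choice(CARDS[force]) for force in forces}

# Each run (or seed) yields a different combination of faces --
# one possible future to hand a writer as a starting point.
for force, face in draw_scenario(seed=42).items():
    print(f"{force}: {face}")
```

With four cards and two faces each, even this toy deck yields dozens of distinct combinations, which is roughly why a one-day workshop could produce ten usefully different scenarios rather than the traditional three or four.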
They include an instruction card that basically describes how we used them and how you might use them as well, but there are lots of ways to make use of them, limited only by your imagination. With this project we've been mainly focused on the book and the 14 stories -- which was always the intended end product. However, we've been pleasantly surprised by how many people have picked up the cards too and let us know how useful they've been for various other scenario and strategic planning efforts. If you think they might be useful for you as well, please check them out. Permalink | Comments | Email This Story

Read More...
posted 1 day ago on techdirt
Millions of people around the globe are using blogging services and social media platforms created by US companies to communicate with each other. Unfortunately, these US companies have been helping censorial governments shut their citizens up by complying with a large variety of content removal requests. While it is generally a best practice to follow local laws when offering services in foreign countries, it's always disappointing when US companies respect laws that have been created solely for the purpose of stifling dissent, silencing critics, and putting marginalized people at the risk of even greater harm. Paul Bischoff of Comparitech has compiled information from a number of companies' transparency reports to produce an easily-readable snapshot of worldwide censorship as enabled by US tech companies. And the countries you'd expect to be demanding the censorship of the most content are the ones you'll see taking top spots at various platforms. Russia, Turkey, and India all top the charts, both in the number of demands made and the actual amount of memory-holed content. Russia must be home to one of the last large Blogger userbases, considering how often the country targets this platform. Russia alone accounted for 53% of the 115,000 removal requests received by Google, which also covers search engine listings and YouTube. Russia's takedown demands have been steadily escalating over the past half-decade, jumping from 2,761 in 2015 to 19,192 in the first half of 2018 alone. Most of Russia's requests are supposedly "national security" related, but that still leaves plenty to spread around to cover other things the government disapproves of, like nudity, drug abuse, and defamation. Turkey comes in at a very distant second. It too likes to claim stuff is either defamation or a threat to national security, but it prefers to perform its vicarious censorship on a different social media platform: Twitter. 
Turkey jumps into the top spot here, accounting for 55.23 percent of the overall number of requests (54,652). Russia is a distant second with 21.17 percent of the overall number. But Russia is gaining ground… [T]he largest number of content removal requests came last year with 23,464 (an 84% increase on the previous year). [...] Russia and Turkey... made up 21.25 and 59.67 percent of the requests in 2018, respectively. Yes, Twitter is Turkey's playground. The easily-offended head of state (and all of his easily-offended officials) love to use content removal requests to silence critics and bury unflattering coverage. Unfortunately, Twitter has been all too helpful when it comes to Turkey oppressing its citizens via third parties. Sure, much of the blocking only affects Turkey, but that's where dissenting views are needed the most. Bischoff's report is worth reading in full. It breaks down the raw data of transparency reports into easily-digestible chunks that show which platforms each country censors most, as well as the types of complaints these countries send most often. You'll also see why one of the biggest censors in the world barely shows up in these reports. China doesn't need third parties' help to control what its citizens see online. It begins this censorship at home by blocking content across multiple platforms (and, often, the platforms themselves), some of which are homegrown services far more popular with Chinese users than their American equivalents. A lack of data doesn't mean China is taking a hands-off approach to content moderation. It simply means the Chinese government rarely has to put its hands on anything outside the country to achieve its aims. One of the more minor players in the global takedown playground is Wikimedia. Outside of the occasional DMCA takedown request, Wikimedia rarely gets hassled by anyone, much less world governments. 
But the requests it does get are far weirder than the run-of-the-mill censor-by-proxy requests delivered to social media platforms. Wikimedia is one of the few American entities that has told the Turkish government to beat it when Turkey asked for negative (but apparently factual) content to be removed. It also had to explain to members of an unnamed political party how Wikipedia -- and the First Amendment -- actually work. A lawyer reached out to us on behalf of a lesser-known North American political party that was unhappy with edits to English Wikipedia articles about the party and one of its leaders. Her clients apparently wanted previous, more promotional versions of the articles restored in place of the later versions. To better engage in discussions with the community, we encouraged them to familiarize themselves with Wikipedia’s recommendations on style and tone and the policy restricting use of promotional language. We also advised that one of the best ways to resolve their concerns is to engage with the community directly. And it has only removed one piece of content ever that wasn't the result of a valid DMCA takedown request: According to Wikimedia, a blogger visiting Burma/Myanmar posted a redacted photo of his visa on his website. Somehow, a version of his visa picture without his personal information removed ended up on an English Wikipedia article concerning the country’s visa policy. “He wrote to us, asking to remove the photo,” wrote Wikimedia. “Given the nature of the information and the circumstances of how it was exposed, we took the image down.” Tech advances have accelerated the pace of global censorship. When you're dealing with the world's greatest communication tool -- the internet -- you kind of have to take the good with the bad. 
Geoblocking content to stay in the good graces of foreign governments may seem like the "lesser of several evils" approach, but even if it's the approach that will result in the least amount of collateral damage, it's still something that encourages authoritarians to continue being authoritarian. Permalink | Comments | Email This Story

Read More...
posted 1 day ago on techdirt
The Canadian Broadcasting Corporation is back on its copyright bullshit. The publicly-funded broadcaster sure seems to enjoy the benefits of fair dealing -- Canada's fair use counterpart -- but it doesn't seem to like others availing themselves of the same copyright exception while using clips from CBC broadcasts. Over the years, the CBC has made some extremely dubious copyright-related claims. It tried to enforce its self-crafted licensing terms to forbid anyone from quoting CBC broadcasts and publications without its explicit permission. It backtracked pretty quickly when everyone chose to ignore its stupid policy and its petty demands for licensing fees. It granted an exception to "bloggers," whatever that means, and then quietly stopped griping about licensing fees. A few years later, it made Techdirt headlines again by threatening podcast apps for "rebroadcasting" its podcasts -- something accomplished by the apps utilizing the CBC's podcast RSS feed. In essence, the threat letters claimed the loading of a URL into a podcast app violated CBC's copyright. It was pretty much the same thing as claiming Google violated CBC's copyrights by showing CBC URLs in its search results. Once again, everyone shrugged off the CBC's idiocy and returned to their daily business of not actually violating CBC's copyrights. Here it comes again. Only this time there's a lawsuit attached, so it's going to be a bit tougher to shrug it off. Michael Geist reports the CBC is suing the Conservative Party of Canada for including short CBC clips in its YouTube videos. The CBC has filed a copyright infringement lawsuit against the Conservative Party over the use of clips on its Not As Advertised website and the use of debate clips on its Twitter feed. The lawsuit, filed yesterday in federal court, claims that a campaign video titled “Look at What We’ve Done” contained multiple excerpts from CBC programming in violation of copyright law. 
Moreover, the CBC also cites tweets that included short video clips of between 21 seconds and 42 seconds from the English-language leaders’ debate. The CBC argues that posting those clips on Twitter also constitutes copyright infringement. The CBC does not appear to be in the right here. ([extremely Fry face] not sure if pun intended or not.) Fair dealing would seem to allow this use of CBC clips, especially in the context of creating videos criticizing liberal politicians. What appears to be driving this case is the CBC's dislike for the Conservative Party, rather than any solid legal footing. Even the isolated clips embedded in Conservative Party tweets could be considered fair dealing since their use (21-42 seconds each) is clearly minimal and does not devalue the content nor prevent CBC from monetizing its copyrighted content. That being said, Canada's fair dealing exception might have to tangle with another law, one pertaining to fair coverage of elections. If so, the Conservative Party might come out on the losing end because it edited clips to conform to its partisan narrative, rather than simply distribute unedited footage from CBC programming. From the lawsuit [PDF]: The respondents' use of copyright-protected material in the Infringing Material diminishes the reputation of CBC/Radio-Canada, its journalists and producers, and takes advantage of their respected integrity and independence in a way that undermines public confidence in Canada's national public broadcaster at a critical time: during a national election campaign in which their coverage must be seen, more than ever, as trustworthy, independent and non-partisan. Selectively editing various news items together to present a sensational and one-sided perspective against one particular political party may leave a viewer with the impression that CBC/Radio-Canada is biased, contrary to its obligations under the Broadcasting Act. This is a stretch. 
This assumes people not familiar with the Conservative Party will assume the edited clips were assembled by the CBC to make certain politicians look bad. That these edited clips would most likely be found with the Conservative Party's name attached makes it far less likely the uninitiated will view them as a partisan hack job performed by the publicly-funded CBC. This argument flows directly into the CBC's claim of violated moral rights. Supposedly, the edited clips have "damaged" the reputation of the broadcast's producers and journalists, turning them into mouthpieces of a partisan group. In an era where politicians are ever quicker to claim journalists are partisan purveyors of fake news, it's a legitimate concern. But it's also overblown in this context, where CBC clips are being used to highlight statements made by politicians, rather than by CBC journalists. But here's the ultimate concern: the CBC is acting against its own interest by engaging in litigation that could further narrow the scope of the fair dealing exception. The CBC is definitely a beneficiary of fair dealing, as it allows the CBC to assemble broadcasts using a variety of sources without having to worry too much about being sued for copyright violations. This is an extremely short-sighted move that may play out in ways the CBC doesn't particularly like, even if it secures a judgment against the Conservative Party. Be careful what you sue for. You just might get it. Permalink | Comments | Email This Story

Read More...
posted 1 day ago on techdirt
Luminar 3 is the fastest way to make your photos stand out. Cutting-edge photography software, Luminar boasts many innovative features. With Accent AI 2.0, you can make dozens of adjustments using one slider. The new technology recognizes people and applies adjustments selectively for more realistic results. You can enhance the skies in your photos with AI Sky Enhancer. Built for your artistic vision, Luminar 3 offers over 70 instant looks hand-crafted by pro photographers. It's on sale for $29. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team. Permalink | Comments | Email This Story

Read More...
posted 1 day ago on techdirt
I like Salesforce founder and CEO Marc Benioff, in part because he doesn't act like most tech CEOs and isn't afraid to speak his mind and actually sound fairly human, rather than a rehearsed automaton with a gazillion PR people vetting every message. That doesn't mean I always (or even often) agree with him, but I appreciate his willingness to speak his mind. I'm not even that surprised that he's jumped on the bandwagon in calling for Facebook to be broken up, even though the reasons he cites are based on false statements that he's apparently been convinced are true (which... maybe is a little scary), and even though it remains totally unclear to me how breaking up Facebook fixes any of the problems discussed by supporters of such a plan (unless the problem is just "I don't want Facebook to exist"). Benioff also, oddly, seems unfamiliar with how the 1st Amendment works, in claiming that Congress needs to make it against the law to lie in political ads: In particular, Benioff took issue with Facebook's recent decision to run political advertisements from the Trump campaign which contain false claims. Benioff said there is "no question" that he would not run such advertisements if he were the head of Facebook, and called on Congress to pass legislation that would require truthful advertising on social media platforms. There are, of course, truth in advertising laws, but those focus on commercial speech, which has been deemed to be moderately less protected under the 1st Amendment. Political speech, on the other hand, is at the top of the protected food chain, and there's no way that the Supreme Court would bless any law from Congress that would "require truthful advertising on social media platforms." 
But what strikes me as really bizarre is that as part of his ill-informed attack on Facebook, he's decided to throw Section 230 of the Communications Decency Act (and, with it, free speech on the internet) under the bus at the exact moment that his own company is heavily relying on Section 230 to get out of a massive lawsuit. In that same CNN interview which generated the headlines about breaking up Facebook, he briefly addresses Section 230 as well: One of the reasons tech platforms are able to publish such content without consequences is the law typically shorthanded as Section 230, which allows internet platform providers to moderate some content without fear of being held liable for most of what users do on their platforms. Benioff called Section 230 "the most dangerous law on the books right now," and said it should be "abolished." He reiterated this statement on Twitter -- a site that literally only exists because of Section 230. Facebook is a publisher. They need to be held accountable for propaganda on their platform. We must have standards & practices decided by law. FB is the new cigarettes—it’s addictive, bad for us, & our kids are being drawn in. We need to abolish section 230 Indemnifying them. pic.twitter.com/OHVDVVd1jt — Marc Benioff (@Benioff) October 16, 2019 This is wrong and ridiculous on many levels. Section 230 doesn't "indemnify" Facebook, it makes sure that legal liability is properly placed on the party doing the speaking, not the party hosting the speech. And if anyone should know that, it's Marc Benioff right this very moment. As we detailed earlier this year, Salesforce is currently being sued for sex trafficking by a group of people who were trafficked on Backpage... because Backpage used Salesforce to track its customers. And, the key argument that Salesforce's very expensive lawyers are making... 
is that Salesforce is protected by CDA 230: If you're unable to read that, it shows that the core argument Salesforce is making is that it's protected by Section 230. That's in the legal filing the company made literally one month ago. In it, Benioff's lawyers highlight the importance of Section 230, and also point out that -- despite the claims of the plaintiffs in the case -- the protections of 230 clearly do apply to content provided by third parties on a platform such as Salesforce (or... Facebook). I'm not sure if Benioff is so confused that he doesn't understand how Section 230 works, or if he's just uninformed. The fact that he uses the phrase "Facebook is a publisher" suggests, unfortunately, that he's been reading/hearing some of the nonsense about "platform v. publisher" -- a distinction that is not found in the law, but is often played up by online trolls and cranks. I thought Benioff was better than an online troll or crank, but this latest outburst suggests I may have overestimated him. Salesforce should be able to get this sex trafficking case tossed on 230 grounds, and Section 230 is a key reason why "the cloud" and cloud services -- a space Benioff helped pioneer -- exist. For him to trash 230 and call for it to be abolished is crazy. If anything, an argument can be made that Benioff is savvy enough to know that he can afford expensive lawyers from Gibson, Dunn & Crutcher to get him out of sticky situations based on what various Salesforce customers have done with his services -- but the many growing SaaS competitors creeping into his market... probably can't. And, hey, abusing the law to block and harm competitors. Why, that sounds like an antitrust problem... Permalink | Comments | Email This Story

Read More...
posted 1 day ago on techdirt
Despite the obvious fraud and false data used to prop up the move, a court recently backed much of Ajit Pai's repeal of net neutrality. But it wasn't all champagne and roses for Ajit Pai and his friends in the telecom sector. The court also shot down the FCC's attempt to ban states from protecting net neutrality themselves, pointing out that when the FCC obliterated its Title II authority over broadband providers (at lobbyist behest), it also eliminated any potential right to tell states what they can or can't do. As such, states are rolling forward with exploring new rules and finally enforcing existing ones. More than two dozen states have examined some form of net neutrality protections in the wake of the repeal. Most notable are California and Washington State. California has been sued by the DOJ (not coincidentally run by former Verizon lawyer Bill Barr) for trying to protect consumers, an effort complicated by this court ruling. Washington wasn't sued, and is moving full speed ahead when it comes to implementing the rules: "Broadband users in Washington State can file net neutrality complaints against ISPs using this general consumer complaint form, a spokesperson for Washington Attorney General Bob Ferguson told Ars. The AG's office said it wouldn't comment on whether there are any pending net neutrality investigations. The text of the Washington law is available on the state's website here. Violations of the law are punishable under Washington's Consumer Protection Act." We've noted a few times how folks crowing that net neutrality must not have mattered because the internet didn't explode are only advertising their own ignorance. For one, the repeal did much more than just kill net neutrality. It effectively gutted the FCC's authority over telecom giants, shoveling any remaining responsibility to an FTC that lacks the authority or resources to stand up to giants like AT&T and Comcast (that was the entire point of the gambit). 
As such, crowing that the internet didn't implode ignores how a void in federal oversight will make a wide variety of non-net-neutrality related issues (high prices, sneaky fees, misleading coverage maps, anti-competitive behavior) worse. Some otherwise bright folks remain under the false impression that eliminating telecom oversight magically results in connectivity Utopia. But when the FCC abdicates its authority over natural monopolies like Comcast and AT&T, existing problems simply get worse. There are decades of data (and endless customer satisfaction surveys) making this point. Most ISPs (with some notable exceptions) have been hesitant to start screwing users and competitors for fear of running afoul of state laws. And as more states (like Minnesota) contemplate tougher rules to fill the void, that's going to continue. Industry lobbyists love to complain about the "fractured regulatory obligations" they now face, but that was a product of their own creation when they decided to spend millions to kill fairly modest (by international standards) consumer protections. The telecom industry made this mess, and now it gets to stew in it until either Congress or the FCC restores federal guidelines. That's why, just as we're seeing on the privacy front, the big industry push moving forward will be to express phony support for a federal net neutrality law their lawyers will write. A law they pretend is a "solution" to the problem but contains so many loopholes as to be effectively worthless. Its only real purpose? To pre-empt tougher federal or state guidelines. Permalink | Comments | Email This Story

Read More...
posted 2 days ago on techdirt
It's no secret that China is tightening its control of every aspect of the online world -- Techdirt has been reporting on the saga for years. But what may not be so clear is how China is doing this. It is not, as many might think, the direct result of diktats from on high, but flows naturally from a massive program of carefully-crafted laws and new government initiatives created with the specific intent of making the online world subservient to the Chinese authorities. Central to this approach is a law passed three years ago, generally known in the West as "China's cybersecurity law". A 2017 review of the law by the New America think tank brought some useful clarity to a complicated political landscape. It names a number of powerful players involved, including the Cyberspace Administration of China, the Ministry of Public Security, the Ministry of Industry and Information Technology, the country's military and intelligence establishment, and BAT -- Baidu, Alibaba, Tencent -- China's Internet giants. The legal framework is also complex. The 2017 article picks out six "systems": the Internet Information Content Management System; the Cybersecurity Multi-Level Protection System; the Critical Information Infrastructure (CII) Security Protection System; the Personal Information and Important Data Protection System; the Network Products and Services Management System; and the Cybersecurity Incident Management System. Clearly, there's a huge amount of activity in this area. But because of the many interlocking and interacting elements contributing to the overall complexity, it's hard to discern what's key, and what it will all mean in practice. 
A 2018 report on the law from the Center for Strategic & International Studies noted that one of the systems -- the Multi-Level Protection System (MLPS) -- has a far wider reach than its rather bland name implies: MLPS ranks from 1-5 the ICT networks and systems that make up China's CII based on national security, with Level 5 deemed the most sensitive. Level 3 or above triggered a suite of regulatory requirements for ICT products and services sold into that CII, including indigenous Chinese IP in products, product submission to government testing labs for certification, and compliance with encryption rules banning foreign encryption technology. That in itself is not surprising. Governments generally want to know that a country's digital infrastructure can be trusted. However, it turns out these rules will apply to any company doing business in China: MLPS 2.0 will cover any industry with ICT infrastructure because it covers the vague category called "network operators," which can include anyone who uses an ICT system. MLPS 2.0 also appears to have a focus on cloud computing, mobile internet, and big data. That extremely broad reach has been confirmed following the recent appointment of a big data expert to oversee the implementation of MLPS. The China Law Blog has analyzed several Chinese-language articles giving details of this move, and what emerges will be deeply troubling for any foreign business operating in China: This system will apply to foreign owned companies in China on the same basis as to all Chinese persons, entities or individuals. No information contained on any server located within China will be exempted from this full coverage program. No communication from or to China will be exempted. There will be no secrets. No VPNs. No private or encrypted messages. No anonymous online accounts. No trade secrets. No confidential data. Any and all data will be available and open to the Chinese government. 
As the China Law Blog explains, this means that there will be important knock-on consequences: Under the new Chinese system, trade secrets are not permitted. This means that U.S. and EU companies operating in China will now need to assume any "secret" they seek to maintain on a server or network in China will automatically become available to the Chinese government and then to all of their Chinese government controlled competitors in China, including the Chinese military. This includes phone calls, emails, WeChat messages and any other form of electronic communication. As previous Techdirt posts have reported, China has been steadily moving in this direction for years. Nonetheless, seeing the endgame of the authorities -- unchecked access to everything flowing through Chinese networks -- confirmed is still troubling. The intentions are now clear, but a key unanswered question is how rigorously the strategy will be enforced. The situation for social media censorship in China gives some grounds for hope. An article on the Asia Dialogue site explains: Despite the broad and still expanding legal framework, the actual implementation of China’s information control is neither monolithic nor consistent. While the Chinese government is increasingly adept at managing and using new media and advanced technologies to its advantage, it also relies heavily on private companies to carry out government directives on a daily basis. The same may be true for the implementation of MLPS 2.0 in particular, and China's cybersecurity law in general. If it isn't, Western companies are likely to find operating in the country even more difficult than it is now, when it is hardly plain sailing. Follow me @glynmoody on Twitter, Diaspora, or Mastodon. Permalink | Comments | Email This Story

posted 2 days ago on techdirt
Remote play capability for the PlayStation 4 has been something of a twisted, never-ending saga. Remote play is one of the console's most useful features, yet Sony has jealously guarded the ability to play its flagship console remotely on all kinds of devices. Originally, the only way you could connect to your PS4 was to buy a PlayStation Vita, a product all but abandoned at this point, or a Sony Xperia Android phone, a line of products the public almost universally ignored. When tinkerers on the internet went about making their own remote play apps that would work with Android phones and PCs, Sony worked tirelessly to update the console firmware to break those homebrew apps. Then Sony came out with its own PC remote play app. Subsequently, some months ago, Sony released remote play functionality for iOS devices only. The explanation at the time was that Sony was likely still trying to push Xperia phones, despite the complete lack of traction. And now, unceremoniously via yet another firmware update, Sony has given up the game and enabled remote play for all Android devices. Fortunately, 7.0 expanded the feature, making it compatible with most Android devices. This means that anyone with an Android-compatible phone in their pocket can play PlayStation 4 games on the go. The new update also coincides with a small quality-of-life patch for iOS remote play, the game streaming app itself having been available on the platform since March of this year. Now, the post goes on to note that there are some aspects of the remote play app that are janky, some of which weren't issues with the homebrew Android app. But the more frustrating aspect is just how long a walk Sony took in getting here. Again, enabling more remote play functionality for the PS4 makes the console more valuable. 
It could have been used as a selling point for the PS4, an already immensely popular device, rather than remote play being used as a selling point for the Vita and Xperia phones, which were barely adopted by the public. And what was with the odd steps in enabling all of this? Sony already had a working Android app when it decided to release remote play for iOS first, sitting on the Android version for several months, seemingly for no reason. The source post calls this what it was: a hostage situation. That said, it’s nice to see Sony finally give up on the remote play first-party hostage situation they’ve kept up for most of the generation. With Apple Arcade, Xbox Game Pass, and Google Stadia all making moves, gaming is once again shifting away from the television, and Sony is smart to make an attempt to capitalize on this trend. This console generation may be swiftly coming to an end, but this may indicate that features of this sort will be available on day one when the PS5 drops next December. You would really, really hope that Sony wouldn't have to learn this lesson all over again with the PlayStation 5. On the other hand, it is Sony.

posted 2 days ago on techdirt
The Portland Police Department's Review Board -- a board composed almost completely of police and government officials -- concluded it's OK for a cop to lie about the law to shut down recordings. Police officers seem to struggle the most when it comes to understanding the rights and protections given to citizens. For years, officers have abused any number of inapplicable laws to arrest citizens who recorded them. When laws and policies were changed in response to court decisions, the abuse of laws continued. The only things that changed were department policies, which some officers just decided to ignore. This hasn't always worked out well for officers, who often end up in court with their immunity stripped. Those that don't progress as far as the federal court system, however, are left in the hands of local complaint review boards. Even when the board is more independent than Portland's, board recommendations for punishment are often ignored in favor of minimal or no discipline. This case, covered by The Oregonian following the release of Police Review Board records, shows an officer knowingly lied about the law and got away with it. The bureau’s Police Review Board found Sgt. Erin Smith didn’t knowingly violate the police directive on truthfulness. Not even with the lying? The sergeant acknowledged he misrepresented the law to get Kerensa to stop videotaping him during a Nov. 30, 2016, demonstration in front of fuel storage facilities in Northwest Portland over the Dakota Access Pipeline. Smith admitted to falsely telling Kerensa that he didn't have the right to film officers and threatened Kerensa that he could be arrested if he didn’t stop. So, how does an officer lie without violating a policy directive on "truthfulness?" As it turns out, there are a few convenient exceptions to this directive. First, officers are allowed to use deception for "legitimate law enforcement purposes." 
But telling someone the law forbade them from filming cops isn't a "legitimate law enforcement purpose." That's the conclusion Portland Police Chief Danielle Outlaw (yes, that's her real name) reached. But she said this was more an issue of performance than a truthfulness violation because the officer admitted to lying about the law. Half-credit, I guess. The officer's direct supervisor was even more charitable. Smith’s supervisor, Traffic Capt. Stephanie Lourenco, found Smith’s deception was permitted under an exception in the policy that says deception is permitted when “necessary to protect the physical safety’’ of an officer. Lourenco did not explain how a passive recording threatened the officer's safety. The generous application of the deception exception encourages officers to invoke it any time they lie to citizens to get them to comply with unlawful orders. Good times. Thank god the PD is engaged in some form of oversight. Otherwise, we might be subjected to even stupider rationalizations... [Board members] argued that Smith didn’t knowingly violate the directive and that “deception’’ is an acceptable de-escalation tactic. Even assuming this was the sort of situation that necessitated de-escalation, how does lying to people result in calmer interactions? Feeding a line of bullshit to a citizen who knows it's bullshit isn't going to nudge anything towards a more peaceful resolution. Making it a practice to lie to citizens just because you know multiple exceptions allow you to doesn't do anything to improve officers' relationships with the people they serve. Fortunately, this exoneration got a second pass from the city's far more independent Citizen Review Committee, which was thoroughly unimpressed with the PRB's logic. Chief Outlaw agreed to take a second look at the case the PRB had refused to act on. But in the end, lying to citizens about their right to record is only worth about one day's pay. 
Cops willing to spin the Wheel O' Accountability may find it pays off more often than not, especially when the PRB is willing to make almost any excuse for an officer's bad behavior.

posted 2 days ago on techdirt
Back in February, you might recall that Google took some heat from owners of its Nest home security platform, after they suddenly discovered that the Nest Secure home security base station contained a hidden microphone the company had never publicly disclosed. The reveal came via a Google announcement sent to Nest customers informing them the hidden mic would soon be turned on, allowing the integration of Google Assistant on the platform. Given tech's shaky history on privacy, some folks were understandably not amused: This is not “messing up.” This is deliberately misleading and lying to your customers about your product. https://t.co/FZcf55L1bU — Eva (@evacide) February 21, 2019 While Google ultimately admitted the "error" and updated its hardware spec sheet, the episode did a nice job illustrating the fact that whether we're talking about products getting better or worse, you don't really own the products you buy, and your agreement with the manufacturer in the firmware-update era can pivot on a dime, often with far less disclosure than we saw here, or none whatsoever. When it comes to privacy (especially given the flimsy security in many IOT devices), that's kind of an important conversation to be having. Likely responding to the resulting fracas, Senator Cory Gardner has introduced the Protecting Privacy in our Homes Act, which would require tech companies to include a label on products disclosing the presence of recording devices. Gardner's been trying to shore up the internet of broken things for a few years now, though the efforts usually stall in the legislative process, and his IOT Cybersecurity Act, introduced last spring, has struggled to gain much traction in a distracted and well-lobbied Congress. Says Gardner of this latest effort: "Consumers face a number of challenges when it comes to their privacy, but they shouldn’t have a challenge figuring out if a device they buy has a camera or microphone embedded into it. 
This legislation is about consumer information, consumer empowerment, and making sure we’re doing everything we can to protect consumer privacy." Outside of legislation, there's not a whole lot being done to ensure the millions of devices we connect to the internet annually have reasonable security and privacy safeguards in general. Like so many issues, the IOT industry doesn't much care -- they're on to selling the next greatest thing and have little interest in retroactive security and privacy updates. Consumers often don't care -- in part because they're completely clueless about the scope of the problem (especially if functionality is hidden). And lobbying ensures government usually doesn't much care either. That has left much of the problem in the laps of consumer groups, researchers, and activists, though many of these efforts (like Consumer Reports' quest to shame companies for bad security and privacy practices in product reviews) can only accomplish so much without industry and government's help. Ultimately this just means we're going to see a lot more hacking, privacy violations, and related scandals (and even potentially tragedies) before we start taking the problem of IOT privacy, security, and transparency seriously.

posted 2 days ago on techdirt
You've got to be a special kind of law enforcement officer to have two lawsuits filed against you in the same day. Hamilton County Deputy Daniel Wilkey is that kind of special. The Tennessee law enforcement officer managed to violate the rights of enough people that two of them retained lawyers. This suggests Deputy Wilkey violates rights on a regular basis, but maybe not egregiously enough to merit a lawsuit in every case. Both cases here are disturbing. And they're disturbing in very different ways. I've never read a civil rights lawsuit against an officer that included claims of a forcible religious experience, but here we are. (h/t Peter Bonilla) The first lawsuit [PDF], filed by Shandle Riley, alleges that Deputy Wilkey followed her to a friend's house from a nearby gas station. Once he had (sort of) pulled her over, things got weird quick. First, Deputy Wilkey claimed Riley was holding meth. To prove this, he engaged in a full-body patdown. Then he ordered her to take off her bra and "shake her bra and shirt" to prove she hadn't stashed any meth there. Riley asked for a female officer to be present during this "search" but the deputy told her the law doesn't require female cops to search female citizens. He then asked if she had anything illegal in her car. She said she had a marijuana roach stashed in a pack of cigarettes. At that point, Deputy Wilkey became verbally abusive. Then he decided to strike a deal with the alleged criminal. We'll go to the lawsuit for that because… well, it offers the driest recounting of a positively insane situation. Wilkey then approached Plaintiff and asked her if she was "saved" and believed in Jesus Christ. Plaintiff stated that she believed in Jesus Christ, but that she was not "saved" by her own choice. Wilkey then told Plaintiff that God was talking to him during the vehicle search, and Wilkey felt the Lord wanted him to baptize the Plaintiff. Wilkey further told Plaintiff that he felt "the spirit." Um. Do what now? 
These are words coming from the mouth of a sworn peace officer. And that's not the end of it. The option given to Riley was to participate in this highly unconventional baptism presided over by an officer of the law or get thrown into the gaping maw of the criminal justice system with as much force as Deputy Wilkey could muster. If Riley agreed to a baptism, Wilkey said he would only cite her for marijuana possession and speak to the judge on her behalf. Riley complied with Wilkey's demands, which included grabbing towels from her friend's house and following Wilkey's cruiser out to a nearby lake. At the lake, Riley and Wilkey were joined by Deputy Jacob Goforth, who did nothing as Wilkey proceeded with the "baptism." Wilkey told Plaintiff that Goforth was present because, in order for a baptism to be valid, a witness must "attest" to the ritual. Wilkey then stripped nearly naked, with only his boxer shorts on. Wilkey then gave Plaintiff the option to strip too, but Plaintiff declined. Wilkey then lead Plaintiff into the near waist deep and frigid water, placed one hand on Plaintiff's back, and his other hand on Plaintiff's breasts, and completely submerged Plaintiff under the water. Wilkey held Plaintiff under water for several moments, then with his hands still positioned on her back and breasts, raised Plaintiff from the cold water. Plaintiff was shivering uncontrollably, and felt horribly violated. Unfortunately for Riley, I doubt there's a case on point that will easily eliminate Wilkey's qualified immunity defense. But hopefully, the court will recognize this is batshit insane enough that it doesn't need to find a case on point to conclude Wilkey violated her rights. To top it all off, Riley held up her end of the under-the-color-of-law bargain. Deputy Wilkey did not. At no time did Wilkey ever [go to] court on Plaintiff's behalf and speak to the judge. If that was the only thing Wilkey was being sued about, it would be enough to question his fitness for duty. 
But as you already know, this isn't the end of the accusations against the deputy. The second lawsuit, filed in the same court on the same day, alleges Deputy Wilkey engaged in a suspicionless stop that turned into an impromptu roadside anal cavity search and the beating of a handcuffed man. And oh my god does it start with one of the dumbest things an officer has ever said to defend a pretextual stop. From the lawsuit [PDF]: Wilkey followed Plaintiffs, and conducted a traffic stop of the Plaintiffs on the false claims of "window tint violation" and that he could smell the odor of marijuana as Wilkey followed the plaintiffs. This assertion of Wilkey's exceptional olfactory senses is followed by a parade of brutalities inflicted on the passenger of the pulled-over vehicle at the hands of the deputy. Fortunately for the plaintiffs, this whole interaction was recorded. Here's the lawsuit's description of those events: Wilkey handcuffed James, and the individual Defendants took James to the front of one of their police vehicles. Wilkey then began to grab James' genitals. When James told Wilkey that James had an untreated and large hernia and that Wilkey's actions were causing James pain, Brewer and Wilkey jerked James' arms high above his back, and slammed James face-down onto the hot engine hood, causing injury to James. Wilkey and Brewer then beat James with fists, knees, and feet, slammed James to the ground, and continued their brutalization of James. Wilkey and Brewer then removed James' pants and shoes, while still beating James. Wilkey and Brewer then forced James' face back onto the hot hood of the same police vehicle and continued to jerk his arms high above his back, and beat James. While Brewer continued to force James' face back onto the hot hood of the same police vehicle and jerk his arms high above his back Wilkey donned a set of gloves, pulled down James' underwear, and conducted an anal cavity search of James. 
The lawsuit goes on to note that James suffered numerous injuries including "tearing of his anus" and an aggravation of his existing hernia. The charges brought against James (the deputies discovered drugs in his underwear) were all dropped after the dashcam video was made public. Deputy Wilkey has been suspended, but it's the nice kind that means he'll be paid to do nothing while the Sheriff's Office decides what to do with him. It would seem obvious he's too expensive to keep around.

posted 3 days ago on techdirt
Even when you're shelling out thousands of dollars for the latest smartphone and an "unlimited" data plan for it to run on, that expenditure still puts you at great privacy risk. Wireless carriers, for years, have collected and sold your location and other data to a long line of dubious middlemen, and despite a lot of sound and fury on this subject, few (outside of maybe the EFF) are really doing much about it. And with the FCC recently having self-immolated at lobbyist request and any new meaningful privacy protections derailed by bickering, that's not changing anytime soon. Less discussed is the privacy nightmare you'll find in "discounted" phones designed to help "bridge the digital divide." While numerous vendors and tech giants have cooked up lower-cost Android phones with marketing focused on helping the poor, a new study by advocacy group Privacy International found that the privacy trade-offs of these devices are... potent. Not only do they usually come with outdated OSes, opening the door to hackers, but user control is locked down to such a degree that owners are unable to remove apps that may also pose security risks: "The MYA2 also has apps that can’t be updated or deleted, and those apps contain multiple security and privacy flaws. One of those pre-installed apps that can’t be removed, Facebook Lite, gets default permission to track everywhere you go, upload all your contacts, and read your phone’s calendar. The fact that Facebook Lite can’t be removed is especially worrying because the app suffered a major privacy snafu earlier this year when hundreds of millions of Facebook Lite users had their passwords exposed. Facebook did not respond to request for comment." It's part of a broader issue in telecommunications where privacy has become a luxury available only to those who can afford it. 
Some telecom giants like AT&T have tried to push the envelope even further, only letting users opt out of online snoopvertising if they're willing to pay $500 more annually for telecom services. Between the apps, the phone, hackers, and your wireless carrier tracking, hacking, and monetizing your every waking moment, it's a privacy and security minefield out there for even affluent smartphone buyers. Studies suggest low-income users realize that in the modern telecom landscape there are stark privacy penalties for being poor, yet feel they have no real power in the equation: "Yet millions of Americans who can’t afford to buy a computer or install broadband internet at home often have no choice but to use such devices, which become their sole means of accessing the internet. If they want to enjoy the same basic conveniences that people in higher socioeconomic tiers have—such as transportation directions, online bill pay, and email—they may have to give up their privacy in exchange." The market won't stop the practice because it's profitable to hoover up every shred of data. The government won't stop it because Congress is slathered with mountains of cross-industry campaign contributions that eliminate any motivation to craft meaningful privacy guidelines with any real teeth. With 3.7 billion users expected to have their only online access come via smartphone by 2025, that might just be a problem, and making privacy a "luxury feature" will only make said problem worse.

posted 3 days ago on techdirt
Whether you work in sales or marketing, you run your own company, or you want to build your own apps, mastering MySQL is crucial to solving complex business problems using insights from data. The Ultimate MySQL Bootcamp gives you a solid foundation in databases in a way that’s both informative and engaging. The course is chock-full of exercises, challenges, and projects, with ample opportunity for you to get your hands dirty writing code and apply what you’re learning to real-world challenges. It's on sale for $12. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
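To give a flavor of the "business insights from data" exercises a course like this covers, here is a minimal sketch of a typical aggregation query. The table name, columns, and data are invented for illustration, and Python's built-in sqlite3 module stands in for MySQL so the snippet runs without a database server; the SQL itself (GROUP BY with SUM) works the same way in MySQL.

```python
import sqlite3

# In-memory database as a serverless stand-in for a MySQL instance.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical sales table for the exercise.
cur.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 120.0), ("bob", 80.0), ("alice", 45.5)],
)

# Aggregate revenue per customer -- the bread and butter of
# business-analytics query practice.
cur.execute(
    "SELECT customer, SUM(amount) AS total "
    "FROM orders GROUP BY customer ORDER BY total DESC"
)
rows = cur.fetchall()
print(rows)  # [('alice', 165.5), ('bob', 80.0)]
conn.close()
```

Against a real MySQL server the only change would be the connection step (e.g. a client library such as mysql-connector-python instead of sqlite3); the query text is portable.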

posted 3 days ago on techdirt
We've written about the "Reid Technique" -- a highly controversial police interrogation technique -- a few times in the past, mainly to criticize it. If you've ever seen a police procedural on TV, you're probably familiar with the technique -- it's the one that verges on a "good cop / bad cop" approach in which a good cop tries to "justify" the crime, telling a suspect all the reasons why it's "understandable" that a person would have committed the crime. Back in 2013, a very thorough New Yorker article covered how the technique was responsible for a ton of false confessions, while also highlighting how the UK and Canada had long moved away from the technique because of the false confessions problem. Last year, one of the leading firms that taught the Reid Technique announced that it would stop teaching the method specifically because of the problems of false confessions and a recognition that "confrontation is not an effective way of getting truthful information." However, the firm that still officially licenses the technique, John E. Reid & Associates, continues to stand by the technique, no matter how many reports of problems come out. And now it's amped that up in the dumbest possible way: by suing Netflix and Ava DuVernay for defamation over her miniseries about the Central Park 5, When They See Us. At issue? In the final episode of the series, a discussion ensues between Manhattan assistant D.A. Nancy Ryan and a New York City detective who was involved in eliciting the confessions of the Central Park Five. During this conversation, Ryan's partner says, "You squeezed statements out of them after 42 hours of questioning and coercing, without food, bathroom breaks, withholding parental supervision. The Reid Technique has been universally rejected. That's truth to you." The lawsuit also claims that the interrogation techniques discussed in When They See Us were not consistent with the actual Reid Technique. 
There are many, many problems with this lawsuit -- starting with the fact that what's described there is not defamatory. Sure, it may not be true that the technique has been "universally" rejected. There are still straggler police departments that use it. But, at worst, that's rhetorical hyperbole from one character in the series. Second, while this is based on a true story, it's still a depiction of that story, and no one is going to take a single statement by a single character in the film as some sort of factual statement. Third, if this case actually does move forward, I can't imagine that John E. Reid & Associates actually wants this case to get to discovery -- in which Netflix and the other defendants might seek to establish just how debunked the Reid Technique has become. The 41-page complaint is really quite something. And I don't mean a good something. Much of it goes on and on about all the various things those practicing the Reid Technique are not supposed to do during an interrogation... but... that's meaningless with regards to the question of whether or not the miniseries was defamatory. Even more bizarre? The lawsuit calls out a different famous Netflix series, the popular documentary series Making a Murderer, which includes a bit in its second season where attorneys point out that questionable interrogation techniques that were used during the (now somewhat infamous) interrogation of Brendan Dassey were not a part of the Reid Technique. So what's that got to do with anything? The filing claims that this is evidence that Netflix "knew" that these kinds of interrogation techniques were not sanctioned as a part of the Reid Technique. Again, it's not clear how that shows anything. And it gets even worse. As the complaint itself admits, after the one character calls out the other for using the Reid Technique, that character retorts: "I don't even know what the fucking Reid Technique is. Okay? I know what I was taught. I know what I was asked to do and I did it." 
That actually undercuts the very claim of defamation in the case, as it makes it clear, at least, that the character accused of using the Reid Technique didn't even know what the Reid Technique was. So it's bizarre for the company to argue that the show is actually saying he used the Reid Technique. The company insists this doesn't detract from their argument... but it totally does. It makes it pretty clear that this is just a statement made by one character in a portrayal of what happened, and that even the other characters don't all agree with that one character. It is hardly defamatory towards the Reid Technique. And, then, of course, there's the fact that all this lawsuit is really going to accomplish is get that much more attention on the sketchy history of the Reid Technique, the fact that it has resulted in false confessions, and the fact that many, many police departments have abandoned it for that very reason. Of course, as lawyer Andrew Fleischman notes, there may be another approach: The people who came up with this lawsuit need to be sat down in a room for 16 hours without a lawyer just so we can get their side of the story. https://t.co/QB9OTNxcQd — Andrew Fleischman (@ASFleischman) October 15, 2019
