posted 12 days ago on techdirt
Hopefully you will recall FlightSimLabs, the company that makes custom add-ons for computer flight simulation software. FSL made it onto our pages after a Reddit user noticed that every installation of FSL software, including that of a legitimate purchase, installed a file named "test.exe" which was not just a form of DRM, but which also served as a Chrome password dumping tool, extracting user names and passwords from people's web browsers. Wherever the fuzzy line between DRM software and malware lies, FSL's installation of its test.exe file clearly leapt over that line with a flourish. The backlash in the Reddit communities and elsewhere was swift and severe, leading Lefteris Kalamaras, who runs FSL, to release the following statement: "We have already replaced the installer in question and can only promise you that we will do everything in our power to rectify the issue with those who feel offended, as well as never use any such heavy-handed approach in the future. Once again, we humbly apologize!" And that really, really should have been the end of it. If nothing else, the backlash from the community should have informed FSL as to the precise tolerance its customers had for this type of nonsense, which is to say zero. Amazingly, despite Kalamaras' promise, it appears FSL decided to give this DRM thing another try, and somehow managed to make itself look even shittier in the process. Just before the weekend, Reddit user /u/walkday reported finding something unusual in his A320X module, the same module that caused the earlier controversy. “The latest installer of FSLabs’ A320X puts two cmdhost.exe files under ‘system32\’ and ‘SysWOW64\’ of my Windows directory. Despite the name, they don’t open a command-line window,” he reported. “They’re a part of the authentication because, if you remove them, the A320X won’t get loaded. Does someone here know more about cmdhost.exe? Why does FSLabs give them such a deceptive name and put them in the system folders? I hate them for polluting my system folder unless, of course, it is a dll used by different applications.” If you don't have a technical background, here's the short version: FSL once again delivered DRM onto users' machines, but named the files to mimic a common Windows background process that users see all the time. It's actually quite common for a user opening Task Manager to see several instances of the similarly named conhost.exe -- Windows' legitimate "Console Window Host" process -- running at once. In other words, cmdhost.exe is the kind of thing nearly everyone would scroll past, assuming it's legit. As several people on Reddit have pointed out, this sort of misleading naming of software services is a hallmark of malware. “Hiding something named to resemble Window’s “Console Window Host” process in system folders is a huge red flag,” one user wrote. “It’s a malware tactic used to deceive users into thinking the executable is a part of the OS, thus being trusted and not deleted. Really dodgy tactic, don’t trust it and don’t trust them,” opined another. Why FSL seems to get all of its best ideas from the realm of malware is an open question. The company put out a statement explaining that the file is a part of its product activation software and that it had been vetted by every major antivirus maker out there. Both claims appear to be true, which doesn't even begin to explain why FSL, having had its reputation so thoroughly tarnished recently, thought pulling this naming trick with its DRM was a good idea. Reddit users remained on the warpath, causing FSL to torpedo its reputation even further.
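For readers who want to check their own machines, here is a minimal, hypothetical sketch (not an official removal tool; the paths are simply the ones reported on Reddit, and the script assumes a standard C:\Windows install) that looks for the reported cmdhost.exe files and prints their SHA-256 hashes so they can be looked up manually on a service such as VirusTotal:

import hashlib
import os

# Paths reported by /u/walkday -- purely illustrative; adjust if your
# Windows directory lives somewhere other than C:\Windows.
SUSPECT_PATHS = [
    r"C:\Windows\System32\cmdhost.exe",
    r"C:\Windows\SysWOW64\cmdhost.exe",
]

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

for path in SUSPECT_PATHS:
    if os.path.exists(path):
        print(f"FOUND   {path}  sha256={sha256_of(path)}")
    else:
        print(f"absent  {path}")

# Note: Windows' real "Console Window Host" is conhost.exe, not cmdhost.exe,
# so any cmdhost.exe sitting in these folders deserves a closer look.

A hash lookup won't prove anything about intent, of course, but it at least tells you whether the file on your disk matches the one other users have been reporting.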
In private messages to the moderators of the /r/flightsim sub-Reddit, FSLabs’ Marketing and PR Manager Simon Kelsey suggested that the mods should do something about the thread in question or face possible legal action. “Just a gentle reminder of Reddit’s obligations as a publisher in order to ensure that any libelous content is taken down as soon as you become aware of it,” Kelsey wrote. Noting that FSLabs welcomes “robust fair comment and opinion”, Kelsey gave the following advice: “The ‘cmdhost.exe’ file in question is an entirely above board part of our anti-piracy protection and has been submitted to numerous anti-virus providers in order to verify that it poses no threat. Therefore, ANY suggestion that current or future products pose any threat to users is absolutely false and libelous.” The letter concluded by suggesting how much FSL would just hate to have to get its lawyers involved if the Reddit moderators left the critical posts up. The mods refused to comply, leading FSL to send another message to the moderators declaring the critical posts defamatory and warning that, if they were not cleaned up, the company would have "no choice" but to send in the lawyers. Just to be clear, the legal threats here are nonsense. Contrary to the claims in the message, Reddit is not under any "obligation as a publisher" to take down such content, thanks to CDA 230. Oh, and all of that presumes that the original content is, indeed, libelous. Which it is not. The mods again refused, while also accusing FSL of trying to game Reddit's voting system to push down critical posts. “While what you do on your forum is certainly your prerogative, your rules do not extend to Reddit nor the r/flightsim subreddit. Removing content you disagree with is simply not within our purview.” The letter, which is worth reading in full, refutes Kelsey’s claims and also suggests that critics of FSLabs may have been subjected to Reddit vote manipulation and coordinated efforts to discredit them. Once again, responding to internet posts and comments a company doesn't like by trying to censor them, particularly after going through a reputational gauntlet previously, might just be about as dumb as it gets. Between the DRM, the shady installation of software, and the anti-consumer behavior to cover it all up, one wonders what flight simulator mod could possibly be worth engaging with FlightSimLabs ever again. Permalink | Comments | Email This Story

posted 12 days ago on techdirt
Reason number a billion why quotas for law enforcement are a bad idea: they encourage the worst behavior. The Victoria (AUS) Police recently performed an internal investigation into breathalyzer tests deployed 17.7 million times over the last 5½ years. Prompted by an "anomaly" in the data, investigators uncovered something horrific and ridiculous all at the same time: Victorian cops blow… thousands of times a year. Victorian police faked more than a quarter of a million roadside breath tests in what appears to be a deliberate ruse to dupe the system. An internal investigation has found 258,000 alcohol breath tests were falsified over 5½ years, The Age has learned. If there's an upside (and is there?), it's that it did not result in false arrests. These weren't faked tests used to prosecute people for driving under the influence. These were tests "performed" to meet quotas given to officers by supervisors. Never underestimate the reluctance of many workforce members to, you know, actually perform work. Police believe officers may have been blowing into the breathalysers themselves, most likely due to laziness and the need to meet targets. The anomaly first spotted by the Transport Accident Commission was the lack of a credible gap between test results. In most cases, several minutes at the very least would elapse between tests of motorists. Paperwork needs to be filled out, drivers need to be conversed with and/or cited, etc. That gap wasn't present in hundreds of thousands of tests, which were performed in batches with no time gap between them. The only explanation? Police snow blow jobs. [T]he faked tests were occurring one after the other. This suggests two things: an officer is either placing a finger over the straw entry hole or they were blowing into the straw themselves. Upside: faked negative tests don't result in false arrests or prosecutions. Downside: everything else. The Victorian Police have proven a quota system doesn't work. The officers have proven they can't be trusted to do their jobs. The latter is at least as significant as the quota issue. If officers are too lazy to hit quotas on breathalyzer tests, what other corners are they cutting while chasing numbers -- whether it's traffic citations or closing investigations? The investigation does prove at least one thing: officers are abusing the trust placed in them, both by their superiors and the general public. The only factor that appeared to deter test fakery was direct oversight. It was not a practice found at supervised drug and alcohol bus testing sites. What will happen to all these lazy officers who abused the trust placed in them? Probably not much of anything. Despite this having been made public, accompanied by statements from police officials confirming the accuracy of the report, government officials further up the ladder -- the oversight -- appear to be withholding judgment until they are "comprehensively briefed." If heads roll, it will hopefully start up top and continue through the rank-and-file. But heads won't start rolling. The culling will probably target the inanimate objects first. The quota system is effectively dead. It will be the scapegoat sacrificed so lazy cops can keep their jobs. It definitely should go, precisely because it encourages this sort of behavior. But it shouldn't be the only thing on the chopping block as the Victorian police seek to bring an end to this unflattering news cycle. Laziness is ingrained behavior, and faking breath tests may prove to be the tip of the iceberg.
Everything still underwater potentially contains serious civil liberties violations. The sooner the Victorian Police digs into officers' behavior in all areas of their jobs, the sooner it can begin regaining the public's trust. Permalink | Comments | Email This Story
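As an aside on the detection method described above: the giveaway was timing. Here is a minimal, hypothetical sketch (made-up data and field names, not the Transport Accident Commission's actual analysis) of the kind of check that flags breath tests logged with implausibly small gaps between them:

from datetime import datetime, timedelta

# Made-up log of (device_id, timestamp) entries -- real data would come
# from the breathalyser units' own records.
tests = [
    ("unit-07", "2017-03-04 21:02:10"),
    ("unit-07", "2017-03-04 21:02:31"),  # 21 seconds later: suspicious
    ("unit-07", "2017-03-04 21:02:55"),  # another 24 seconds: suspicious
    ("unit-07", "2017-03-04 21:11:40"),  # plausible gap
]

# Assumption: a genuine roadside stop involves paperwork and conversation,
# so anything under a couple of minutes between tests on one unit looks off.
MIN_PLAUSIBLE_GAP = timedelta(minutes=2)

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")

flagged = [
    (dev_a, ts_a, ts_b)
    for (dev_a, ts_a), (dev_b, ts_b) in zip(tests, tests[1:])
    if dev_a == dev_b and parse(ts_b) - parse(ts_a) < MIN_PLAUSIBLE_GAP
]

for dev, first, second in flagged:
    print(f"{dev}: back-to-back tests at {first} and {second}")

Run over 17.7 million records, that sort of simple gap check is all it takes to surface hundreds of thousands of tests performed in rapid-fire batches.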

posted 12 days ago on techdirt
Chicago's gun violence rate -- now in the midst of a long period of decline, never mind what the Attorney General and President say in public statements/tweets -- has been a concern for a few years now. The DOJ, before being chased away from policing the police by Jeff Sessions, noted the PD had destroyed its relationship with city residents with unconstitutional policing and an antagonistic attitude. A couple of high-profile shootings of Chicago residents by police officers did nothing to help. Chicago is the poster child for violent crime, despite its rate of crime being lower than that of under-the-radar cities like Ft. Worth, Memphis, and Houston. This has led to all sorts of solutions being suggested, including the return of unconstitutional policing (Attorney General), sending in the troops (President Trump), and a sharp uptick in surveillance (the Chicago PD). The New York Times covers the city's surveillance expansion under the headline "Can 30,000 Cameras Help Solve Chicago's Crime Problem?" The answer is unclear, despite the many glowing reviews of the city's camera network delivered by law enforcement officers and officials. The subhed -- "But what does it mean for residents' privacy?" -- is barely discussed. The network Chicago is deploying involves thousands of hi-def cameras, automatic license plate readers, mugshot databases, and predictive policing software. That the system went online at roughly the time homicide numbers began to decline has prompted praise -- perhaps unearned -- for the system's ability to rid the city of its violent crime problem. The department tested the use of technology in two of its most violent areas in early 2017. When crime began to fall, the department ultimately set aside space in 13 of its 22 police stations for the surveillance centers, which tap into the city’s approximately 30,000 government-operated closed-circuit cameras. Inside, civilian crime analysts from the University of Chicago Crime Lab — self-described “nerds” who are often learning data science on the fly — and uniformed officers work side by side at computer terminals, scrutinizing crime data as they search for trends. Much of the technology is similar to equipment used by dozens of police departments around the nation: sensors to detect the location of gunshots, software designed to predict the time and location of crimes and license plate readers that photograph thousands of plates per minute. Cops like the system. It provides a wealth of information, starting with the location of reported gunshots and working from there to bring up arrest records of people in the area and vehicle locations of suspects. Fun stuff for cops. Not so much for the thousands of innocent people who live in heavily-surveilled areas. Predictive policing software's track record isn't much better than facial recognition AI's. Both have a tendency to generate false positives, but predictive policing allows cops to conjure reasonable suspicion out of ambient temperature, moon phases, and someone's proximity to known criminals. You may think I'm being facetious, but here's the receipt. The civilian analysts spend much of their time feeding a range of information into software called HunchLab, which considers a number of variables — from gang tensions and gunshot reports to the number of parolees living in an area — to forecast crime by giving probability scores, much like a meteorological report.
HunchLab also examines less obvious data points, like the location of liquor stores and schools, an area’s proximity to local expressways, and even weather conditions and phases of the moon (there is more crime during full moons; no one knows why). In reality, it seems to do little more than shore up preconceived notions. This is what may have gotten the city into the mess in the first place. And the DOJ's inability to move forward with investigations of police forces means the PD may never have to answer fully for its unconstitutional behavior. What residents are getting instead of better police officers and policies is a massive surveillance network -- one deployed with almost zero public comment or oversight. Some transparency has been put in place, but only after the system has been fully deployed, and it largely consists of invitations to community leaders to tour local "strategic centers" to look at the people looking at screens showing images of their friends and neighbors. As local activist Kofi Ademola puts it, residents weren't asked about the new system. They were simply told this was the way forward. “There was not a conversation like, ‘Do you want this in your community?’ ” he said. “Instead, the Chicago police say, ‘This is in your community and it is going to cut crime,’ and unfortunately, people don’t question that. It’s now been normalized for these communities to be under constant surveillance, which contributes to the criminalization of people. It is problematic.” The situation is unlikely to change. It won't be scaled back, even if current crime rate declines plateau. To its proponents, it's the only explanation for the decline in violence. Efforts have been made to overhaul stop-and-frisk policies, which may have helped community relations, but there's no weight backing it up or codifying the changes. The DOJ isn't going to step in and demand permanent changes and city officials have taken heat from law enforcement reps for the minimal corrective efforts they have managed to put in place. Sure, the system may do some good, but at what societal cost? Will certain residents just assume their lives will be documented and stored in law enforcement databases so long as they live or work in certain neighborhoods? Will guilt by association increase the number of interactions with law enforcement just because they have the misfortune to live in gang territory or a few houses away from recently-released felons? Those are questions that no one can answer with anything but "yes" at this point. The police feel this is an acceptable tradeoff: lower crime for 24-hour surveillance. Those being surveilled have been given no say in the matter. Permalink | Comments | Email This Story
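To make the idea of "probability scores" concrete, here is a deliberately toy, hypothetical sketch of how a risk score might be assembled from variables like the ones named above. It is not HunchLab's actual model; the features and weights are invented for illustration:

import math

# Invented weights, for illustration only -- not HunchLab's model.
WEIGHTS = {
    "gunshot_reports_last_week": 0.6,
    "gang_tension_index": 0.8,
    "parolees_in_area": 0.3,
    "liquor_stores_nearby": 0.2,
    "full_moon": 0.1,  # the oddball variable the Times mentions
}
BIAS = -3.0

def crime_probability(features):
    """Logistic score: squash a weighted sum of risk features into (0, 1)."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

area = {
    "gunshot_reports_last_week": 4,
    "gang_tension_index": 1.5,
    "parolees_in_area": 2,
    "liquor_stores_nearby": 3,
    "full_moon": 1,
}
print(f"Forecast 'risk' for this area: {crime_probability(area):.0%}")

The toy example also illustrates the critique above: whatever proxies go into the weighted sum come out the other side wearing the costume of an objective probability.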

posted 12 days ago on techdirt
As we've noted recently, the current copyright reform proposal being considered by the EU is full of extremely dangerous ideas, from mandated filters to a "link tax". This week, we're joined by European Parliament member Julia Reda to talk about the details of the regulatory process and the problems with the current proposal. Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt. Permalink | Comments | Email This Story

posted 12 days ago on techdirt
Imagine that you're a new-media entrepreneur in Europe a few centuries back, and you come up with the idea of using moveable type in your printing press to make it easier and cheaper to produce more copies of books. If there are any would-be media critics in Europe taking note of your technological innovation, some will be optimists. The optimists will predict that cheap books will hasten the spread of knowledge and maybe even fuel a Renaissance of intellectual inquiry. They'll predict the rise of newspapers, perhaps, and anticipate increased solidarity of the citizenry thanks to shared information and shared culture. Others will be pessimists—they'll foresee that the cheap spread of printed information will undermine institutions, will lead to doubts about the expertise of secular and religious leaders (who are, after all, better educated and better trained to handle the information that's now finding its way into ordinary people's hands). The pessimists will guess, quite reasonably, that cheap printing will lead to more publication of false information, heretical theories, and disruptive doctrines, which in turn may lead, ultimately, to destructive revolutions and religious schisms. The gloomiest pessimists will see, in cheap printing and later in the cheapness of paper itself—making it possible for all sorts of "fake news" to be spread—the sources of centuries of strife and division. And because the pain of the bad outcomes of cheap books is sharper and more attention-grabbing than contemplation of the long-term benefits of having most of the population know how to read, the gloomiest pessimists will seem to many to possess the more clear-eyed vision of the present and of the future. (Spoiler alert: both the optimists and the pessimists were right.) Fast-forward to the 21st century, and this is just where we're finding ourselves when we look at public discussion and public policy centering on the internet, digital technologies, and social media. Two recent books written in the aftermath of revelations about mischievous and malicious exploitation of social-media platforms—especially Facebook and Twitter—exemplify this zeitgeist in different ways. And although both of these books are filled with valuable information and insights, they also yield (in different ways) to the temptation to see social media as the source of more harm than good. Which leaves me wanting very much both to praise what's great in these two books (which I read back-to-back) and to criticize them where I think they've gone too far over to the Dark Side. The first book is Clint Watts's MESSING WITH THE ENEMY: SURVIVING IN A SOCIAL MEDIA WORLD OF HACKERS, TERRORISTS, RUSSIANS, AND FAKE NEWS. Watts is a West Point graduate and former FBI agent who's an expert on today's information warfare, including efforts by state actors (notably Russia) and non-state actors (notably Al Qaeda and ISIS) to exploit social media both to confound enemies and to recruit and inspire allies. I first heard of the book when I attended a conference at Stanford this spring where Watts—who has testified several times on these issues—was a presenter. His presentation was an eye-opener, erasing whatever lingering doubt I might have had about the scope and organization of those who want to use today's social media for malicious or destructive ends.
In MESSING WITH THE ENEMY Watts relates in a bracing yet matter-of-fact tone not only his substantive knowledge as a researcher and expert in social-media information warfare but also his first-person experiences in engaging with foreign terrorists active on social-media platforms and in being harassed by terrorists (mostly virtually) for challenging them in public exchanges. "The internet brought people together," Watts writes, "but today social media is tearing everyone apart." He notes the irony of social media's receiving premature and overgenerous credit for democratic movements against various dictatorships but later being exploited as platforms for anti-democratic and terrorist initiatives: "Not long after many across the world applauded Facebook for toppling dictators during the Arab Spring revolutions of 2010 and 2011, it proved to be a propaganda platform and operational communications network for the largest terrorist mobilization in world history, bringing tens of thousands of foreign fighters under the Islamic State's banner in Syria and Iraq." And it wasn't just non-state terrorists who learned quickly how to leverage social-media platforms; an increasingly activist and ambitious Russia, under the direction of Russian President Vladimir Putin, did so as well. Watts argues persuasively that Russia not only assisted and sponsored relatively inexpensive disinformation and propaganda campaigns using the social-media platforms to encourage divisiveness and lack of faith in government institutions (most successfully with the Brexit vote and the 2016 American elections) but also actively supported the hacking of the Democratic National Committee computer network which led to email dumps (using Wikileaks as a cutout). The security breaches, together with "computational propaganda"—social-media "bots" that mimicked real users in spreading disinformation and dissension—played an important role in the U.S. election, Watts writes, helping "the race remain close at times when Trump might have fallen completely out of the running." Even so, Watts doesn't believe Russian propaganda efforts alone would have tilted the outcome of the election—what they did instead was hobble support for Clinton so much that when FBI Director James Comey announced, one week before the election, that the Clinton email-server investigation had reopened, the Clinton campaign couldn't recover. "Without the Comey letter," he writes, "I believe Clinton would have won the election." Later in the book he connects the dots more explicitly: "Without the Russian influence effort, I believe Trump would not have been within striking distance of Clinton on Election Day. Russian influence, the Clinton email investigation, and luck brought Trump a victory—all of these forces combined." Where Watts's book focuses on bad actors who exploit the openness of social-media platforms for various malicious ends, Siva Vaidhyanathan's ANTISOCIAL MEDIA: HOW FACEBOOK DISCONNECTS US AND UNDERMINES DEMOCRACY argues that the platforms—and especially the Facebook platform—are inherently corrosive to democracy. (Full disclosure: I went to school with Vaidhyanathan, worked on our student newspaper with him, and I consider him a friend.) Acknowledging his intellectual debt to his mentor, the late social critic Neil Postman, Vaidhyanathan blames the negative impacts of various exploitations of Facebook and other platforms on the platforms themselves.
Postman was a committed technopessimist, and Vaidhyanathan takes time to chart in ANTISOCIAL MEDIA how Postman's general skepticism about new information technologies ultimately led his younger colleague to temper his originally optimistic view of the internet and digital technologies generally. If you read Vaidhyanathan's work over time, you find in his writing a progressively darker view of the internet and its ongoing evolution, taking a significantly more pessimistic turn around the time of his 2011 book, THE GOOGLIZATION OF EVERYTHING (AND WHY WE SHOULD WORRY). In his earlier book, Vaidhyanathan took pains to be as fair-minded as he could in raising questions about Google and whether it can or should be trusted to play such an outsized role in our culture as the mediator of so much of our informational resources. He was skeptical (not unreasonably) about whether Google's confidence in both its own good intentions and its own expertise is sufficient reason to trust the company—not least because a powerful company can stay around as a gatekeeper for the internet long past the time its well-intentioned founders depart or retire. With ANTISOCIAL MEDIA, Vaidhyanathan cuts Mark Zuckerberg (and his COO, Sheryl Sandberg) rather less of a break. Facebook's leadership, as I read Vaidhyanathan's take, is both more arrogant than Google's and more heedless of the consequences of its commitment to connect everyone in the world through the platform. Synthesizing a full range of recent critiques of Facebook's design as a platform, he relentlessly characterizes Facebook as driving us to shallow, reactive reactions to one another rather than promoting reflective discourse that might improve or promote our shared values. Facebook, in his view, distracts us instead of inspiring us to think. It's addictive for us in something like the same way gambling or potato chips can be addictive for us. Facebook privileges the visual (photographs, images, GIFs, and the like), he insists, over the verbal and discursive. And of course even the verbal content is either filter-bubbly—as when we convene in private Facebook groups to share, say, our unhappiness about current politics—or divisive (so that we share and intensify our outrage about other people's bad behavior, maybe including screenshots of something awful someone has said elsewhere on Facebook or on Twitter). Vaidhyanathan suggests that at one point our political discourse as ordinary citizens was more rational and reflective, but now is more emotion- and rage-driven and divisive. Me, I think the emotionalism and rage was always there. Even when Vaidhyanathan allows that there may be something positive about one's interactions on Facebook, he can't quite help himself from being reductive and dismissive about it: "Nor is Facebook bad for everyone all the time. In fact, it's benefited millions individually. Facebook has also allowed people to find support and community despite being shunned by friends and family or being geographically isolated. Facebook is still our chief source of cute baby and puppy photos. Babies and puppies are among the things that make life worth living. We could all use more images of cuteness and sweetness to get us through our days. On Facebook babies and puppies run in the same column as serious personal appeals for financial help with medical care, advertisements for and against political candidates, bogus claims against science, and appeals to racism and violence." 
In other words, Facebook may occasionally make us feel good for the right reasons (babies and puppies) but that's about the best most people can hope for from the platform. Vaidhyanathan has a particular antipathy towards Candy Crush, which you can connect to your Facebook account—a video game that certainly seems vacuous, but also seems innocuous to me. (I've never played it myself.) Given his antipathy towards Facebook, you might think that Vaidhyanathan's book is just another reworking of the moral-panic tomes that we've seen a lot of in the last year or two, which decry the internet and social media much the same way previous generations of would-be social critics complained about television, or the movies, or rock music, or comic books. (Hi, Jonathan Taplin! Hi, Franklin Foer!) But that's a mistake, primarily because Vaidhyanathan digs deep into choices—some technical and some policy-driven—that Facebook has made that facilitated bad actors' using the platform maliciously and destructively. Plus, Vaidhyanathan, to his credit, gives attention to how oppressive governments have learned to use the platform to stifle dissent and mute political opposition. (Watts notes this as well.) I was particularly pleased to see his calling out how Facebook is used in India, in the Philippines, and in Cambodia—all countries where I've been privileged to work directly with pro-democracy NGOs. What I find particularly valuable is Vaidhyanathan's exploration of Facebook's advertising policies and their effect on political ads—I learned plenty from ANTISOCIAL MEDIA about the company's "Custom Audiences from Customer Lists," including this disturbing bit: "Facebook's Custom Audiences from Customer Lists also gives campaigns an additional power. By entering email addresses of those unlikely to support a candidate or those likely to support an opponent, a campaign can narrowly target groups as small as twenty people and dissuade them from voting at all. 'We have three major voter suppression operations under way,' a campaign official told Bloomberg News just weeks before the election. The campaign was working to convince white leftists and liberals who had supported socialist Bernie Sanders in his primary bid against Clinton, young women, and African American voters not to go to the polls on election day. The campaign carefully targeted messages on Facebook to each of these groups. Clinton's former support for international trade agreements would raise doubts among leftists. Her husband's documented affairs with other women might soften support for Clinton among young women...." What one saw in Facebook's deployment of the Custom Audiences feature is something fundamentally new and disturbing: "Custom Audiences is a powerful tool that was not available to President Barack Obama and Governor Mitt Romney when they ran for president in 2012. It was developed in 2014 to help Facebook reach the takeoff point in profits and revenue. Because Facebook develops advertising tools for firms that sell shoes and cosmetics and only later invites political campaigns to use them, 'they never worried about the worst-case abuse of this capability, unaccountable, unreviewable political ads,' said Professor David Carroll of the Parsons School of Design. Such ads are created on a massive scale, targeted at groups as small as twenty, and disappear, so they are never examined or debated." 
Vaidhyanathan quite properly criticizes Mark Zuckerberg's late-to-the-party recognition that perhaps Facebook may be much more of a home to divisiveness and political mischief (and general unhappiness) than he previously had been willing to admit. And he's right to say that some of Zuckerberg's framing of new design directions for Facebook may be as likely to cause harm (e.g., more self-isolation in filter bubbles) as good. "The existence of hundreds of Facebook groups devoted to convincing others that the earth is flat should have raised some doubt among Facebook's leaders that empowering groups might not enhance the information ecosystem of Facebook," he writes. "Groups are as likely to divide us and make us dumber as any other aspect of Facebook." But here I have to take issue with my friend Siva, because he overlooks or dismisses the possibility that Facebook's increasing support for "groups" of like-minded users may ultimately add up to a net social positive. For example, the #metoo groups seem to have enabled more women (and men) to come forward and talk frankly about their experiences with sexual assault and to begin to hold perpetrators of sexual assault and sexual harassment accountable. The fact that some folks also use Facebook groups for more frivolous or wrongheaded reasons (like promoting flat-earthism) strikes me as comparatively inconsequential. Vaidhyanathan's also too quick, it seems to me, to dismiss the potential for Facebook and other platforms to facilitate political and social reform in transitional democracies and developing countries. Yes, bad governments can use social media to promote support for their regimes, and I don't think it's particularly remarkable that oppressive governments (or non-state actors like ISIS) learn to use new communications media maliciously. Governments may frequently be slow, but they're not invariably stupid—so it's no big surprise, for example, that Cambodian prime minister Hun Sen has figured out how to use his Facebook page to drum up support for his one-party rule, which has driven out opposition press and the opposition Cambodia National Rescue Party. But Vaidhyanathan overlooks how some activists are using Facebook's private groups to organize reform or opposition activities. In researching this review, I reached out to friends and colleagues in Cambodia, the Philippines and elsewhere to confirm whether the platform is useful to them—certainly they're cautious about what they say in public on Facebook, but they definitely use private groups for some organizational purposes. What makes the platform useful to activists is that it's accessible, easy to use, and amenable to posting multimedia sources (like pictures and videos of police and soldiers acting brutally towards protestors). And it's not just images—when I worked with activists in Cambodia on developing a citizen-rights framework as a response to their government's abrupt initiation of "cybercrime" legislation (really an effort to suppress dissenting speech), I suggested they work collaboratively in the MediaWiki software that Wikipedia's editors use. But the Cambodian activists quickly discovered that Facebook was an easier platform for technically less proficient users to learn quickly and use to review draft texts together. I was surprised at this, but also encouraged. Even though I had my own doubts whether Facebook was the right tool for the job, I figured they didn't need yet another American trying to tell them how to manage their own collaborations.
Like Watts's book, Vaidhyanathan's is strongest where it's built on independent research that doesn't merely echo what other critics have said. And both books are weakest when they uncritically import notions like Eli Pariser's "filter bubble" hypothesis or the social-media-makes-us-depressed hypothesis. (Both these notions are echoes of previous moral panics about previous new media, including broadcasting in the 20th century and cheap paper in the 19th. And both have been challenged by researchers.) Vaidhyanathan's so certain of the meme that Facebook's Free Basics program is an assault on network neutrality that he mostly doesn't investigate the program itself in any detail. The result is that his book (to this reader, anyway) seems to conflate Free Basics (a collection of low-bandwidth resources that Facebook provided a zero-rated platform for) with Facebook Zero (a zero-rated low-bandwidth version of Facebook by itself). In contrast, the Wikipedia articles on Free Basics and Facebook Zero lead off with warnings not to confuse the two. In addition to the strengths and weaknesses the two books share, they also have a certain rhetorical approach in common—largely, in my view, because both authors want to push for reform, and because they want to challenge the sunny-yet-unwarranted optimism with which Zuckerberg and Sandberg and other boosters have characterized social media. In effect, both authors seem to take the approach that, as we learn to be much more critical of social-media platforms, we don't need to worry about throwing out the baby with the bathwater—because, really, there is no baby. (If we bail on Facebook altogether, it's only the frequent baby pictures that we'd lose.) Even so, both books also share an unwillingness to call for simple opposition to Facebook and other social-media platforms merely because they're misused. Watts argues persuasively instead for more coherent and effective positive messaging about American politics and culture—of the sort that used to be the province of the United States Information Agency. (I think he'd be happy if the USIA were revived; I would be too.) He also calls for an "equivalent of Consumer Reports" to "be created for social media feeds," which also strikes me as a fine idea. Vaidhyanathan's reform agenda is less optimistic. For one thing, he's dismissive of "media literacy" as a solution because he doubts "we could even agree on what that term means and that there would be some way to train nearly two billion people to distinguish good from bad content." He has some near-term suggestions—for example, he'd like to see an antitrust-type initiative to break up Facebook, although it's unclear to me whether multiple competing Facebooks or a disassembled Facebook would be less hospitable to the kind of shallowness and abuses he sees in the platform's current incarnation. But mostly he calls for a kind of cultural shift driven by social critics and researchers like himself: "This will be a long process. Those concerned about the degradation of public discourse and the erosion of trust in experts and institutions will have to mount a campaign to challenge the dominant techno-fundamentalist myth. The long, slow process of changing minds, cultures, and ideologies never yields results in the short term. It sometimes yields results over decades or centuries." I agree that it frequently takes decades or even longer to truly assess how new media affect our culture for good or for ill.
But as long as we're contemplating all those years of effort, I see no reason not to put media literacy on the agenda as well. I think there's plenty of evidence that people can learn to read what they see on the internet critically and do better than simply cherry-pick sources that agree with them—a vice that, it must be said, predates social media and the internet itself. The result of increasing skepticism about media platforms and the information we find in them may also lead (as Watts warns us) to more distrust of "experts" and "expertise," with the result that true expertise is more likely to be unfairly and unwisely devalued. But my own view is that skepticism and critical thinking—even about experts with expertise—is generally positive. For example, it may be annoying to today's physicians that patients increasingly resort to the internet about their real or imagined health problems—but engaged patients, even if they have to be walked back from foolish ideas again and again, are probably better off than the more passive health-care consumers of previous generations. I think Vaidhyanathan is right, ultimately, to urge that we continue to think about social media critically and skeptically, over decades—and, you know, forever. But I think Watts offers the best near-term tactical solution: "On social media, the most effective way to challenge a troll comes from a method that's taught in intelligence analysis. To sharpen an analyst's skills and judgment, a supervisor or instructor will ask the subordinate two questions when he or she provides an assessment: 'What do those who disagree with your assessment think, and why?' The analyst must articulate a competing viewpoint. The second question is even more important: 'Under what conditions, specifically, would your assessment be wrong?' [...] When I get a troll on Facebook, I'll inquire, 'Under what circumstance would you admit you were wrong?' or 'What evidence would convince you otherwise?" If they don't answer or can't articulate their answer, then I disregard them on that topic indefinitely." Watts's heuristic strikes me as the perfect first entry in the syllabus for media literacy in particular and for criticism of social media in general. In sum, I think both MESSING WITH THE ENEMY and ANTISOCIAL MEDIA deserve to be on every internet-focused policymaker's must-read list this season. I also think it's best that readers honor these books by reading them with the same clear-eyed skepticism that their authors preach. Mike Godwin (@sfmnemonic) is a Distinguished Senior Fellow at R Street Institute. Permalink | Comments | Email This Story

posted 12 days ago on techdirt
You might remember that when HBO comedian John Oliver originally tackled net neutrality on his show in 2014, the FCC website crashed under the load of concerned consumers eager to support the creation of net neutrality rules. When Oliver revisited the topic last May to discuss Trump FCC boss Ajit Pai's myopic plan to kill those same rules, the FCC website crashed under the load a second time. That's not a particular shock; the FCC's website has long been seen as an outdated relic from the wayback times of Netscape, hit counters, and awful MIDI music. But then something weird happened. In the midst of all the media attention Oliver was receiving for his segment, the FCC issued a statement (pdf) by former FCC Chief Information Officer David Bray, claiming that comprehensive FCC "analysis" indicated that it was a malicious DDoS attack, not angry net neutrality supporters, that brought the agency's website to its knees: "Beginning on Sunday night at midnight, our analysis reveals that the FCC was subject to multiple distributed denial-of-service attacks (DDos). These were deliberate attempts by external actors to bombard the FCC’s comment system with a high amount of traffic to our commercial cloud host. These actors were not attempting to file comments themselves; rather they made it difficult for legitimate commenters to access and file with the FCC." But the FCC's claims were seen as suspect by numerous security experts, who say the crash showed none of the usual telltale signs of an actual DDOS. And reports subsequently emerged indicating that the "analysis" the FCC supposedly conducted never actually occurred. When media outlets began noticing that something fishy was going on, the Trump FCC issued a punchy statement accusing the media of being "completely irresponsible." No evidence was ever provided to journalists or lawmakers that pressured the agency for hard data proving the claims. Fast forward to this week, and new internal FCC e-mails obtained via FOIA request show that yes, the FCC did routinely try to mislead the public and the press with repeated claims of DDOS attacks that never actually happened: "The FCC has been unwilling or unable to produce any evidence an attack occurred—not to the reporters who’ve requested and even sued over it, and not to U.S. lawmakers who’ve demanded to see it. Instead, the agency conducted a quiet campaign to bolster its cyberattack story with the aid of friendly and easily duped reporters, chiefly by spreading word of an earlier cyberattack that its own security staff say never happened." The story is worth a read, and highlights how former FCC CIO David Bray and FCC media relations head Mark Wigfield repeatedly fed false information about the nonexistent attack to reporters, then used those (incorrect) stories to further prop up their flimsy claims about the DDOS: "Bray is not the only FCC official last year to push dubious accounts to reporters. Mark Wigfield, the FCC’s deputy director of media relations, told Politico: “there were similar DDoS attacks back in 2014 right after the Jon Oliver [sic] episode.” According to emails between Bray and FedScoop, the FCC’s Office of Media Relations likewise fed cooked-up details about an unverified cyberattack to the Wall Street Journal. The Journal apparently swallowed the FCC’s revised history of the incident, reporting that the agency “also revealed that the 2014 show had been followed by DDoS attacks too,” as if it were a fact that had been concealed for several years. 
After it was published, the Journal’s article, authored by tech reporter John McKinnon, was forwarded by Bray to reporters at other outlets and portrayed as a factual telling of events. Bray also emailed the story to several private citizens who had contacted the FCC with questions and concerns about the comment system’s issues." The story isn't going to get much mainstream traction thanks to numerous other instances of cultural idiocy we're all currently soaking in, but it's fairly amazing all the same. In short, the FCC appears to have completely concocted a fake DDOS attack in a ham-fisted effort to try and downplay the massive public opposition to its extremely-unpopular policies. Of course that's pretty standard behavior for an agency that also blocked a law enforcement inquiry into fraud during the public comment period, likely also an effort to downplay massive public opposition to the repeal. This isn't likely to be the end of this story, and it's something that's likely to surface in the looming lawsuits against the FCC for its extremely unpopular repeal. Permalink | Comments | Email This Story
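For anyone wondering what "telltale signs" the security experts were looking for, here is a minimal, hypothetical sketch of the kind of access-log sanity check involved (invented log entries and thresholds; this is not the FCC's system or any analysis that was actually run). A flash crowd sent by a TV segment tends to look like many distinct IP addresses each making a handful of requests to the same user-facing pages, while a volumetric attack tends to concentrate traffic in far fewer sources or hammer endpoints ordinary commenters never touch:

from collections import Counter

# Invented access-log entries: (source_ip, requested_path).
log = [
    ("203.0.113.5", "/comment-filing/new"),
    ("203.0.113.9", "/comment-filing/new"),
    ("198.51.100.7", "/comment-filing/new"),
    ("203.0.113.5", "/comment-filing/new"),
    ("192.0.2.44", "/comment-filing/new"),
]

requests_per_ip = Counter(ip for ip, _ in log)
total = len(log)
unique_ips = len(requests_per_ip)
busiest_share = max(requests_per_ip.values()) / total

print(f"{total} requests from {unique_ips} unique IPs; "
      f"busiest single IP accounts for {busiest_share:.0%} of traffic")

None of this is exotic, which is part of why the FCC's refusal to show any such analysis was so telling.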

posted 12 days ago on techdirt
Go from beginner to expert with lifetime access to the eLearnExcel Microsoft Excel Master Certification Bundle. Across its nine courses you'll learn all about pivot tables, time-saving tricks, macros, formulas and more. Each course comes with a certificate of completion. The bundle is on sale for $39. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team. Permalink | Comments | Email This Story

posted 12 days ago on techdirt
Having tried and mostly failed to regulate Wild West internet commerce, legislators have now decided to take a more "hands off" approach to the intersection of communications and commerce. That's what I would be writing if we lived in a world where people learned from their mistakes. But they don't. Whatever has failed a half-dozen times in previous iterations can be rebooted, doubled-down on, and otherwise presented as a legislative solution for a "problem." And this "problem" is always the same. Incumbents who have somehow managed to parlay their fortunes into a "disadvantaged" position want tech companies to give them (or their government) money. Link taxes -- otherwise known as "Google taxes" -- supposedly would allow publishers to recoup their "losses" from having Google send traffic their way. These haven't worked, and in the worst case scenario, Google has simply shut down its Google News service rather than pay for the privilege of referring traffic. Other attempts to make things "fair" for brick-and-mortar businesses competing with Amazon have led to similar outcomes. In one case, the French government decided Amazon could no longer offer free shipping on books to France. Amazon obliged, raising its shipping charge to €0.01. The Australian government has decided to go down the road well traveled and charge Amazon extra for beating local retailers at their own game. A new law that goes into effect at the beginning of July charges Amazon a 10% tax on every imported good sold to Australians. One of the backers of the bill is retailer Harvey Norman, which had this to say about Amazon. "They think they have the right to pay no tax in Australia," Harvey Norman's executive chairman Gerry Harvey said on Thursday of Amazon's decision to "blacklist" the country. “They’ve done the dirty on the government. They’ve done the dirty on the public.” And what is this "dirty" Amazon is accused of doing? Nothing more than deciding the beneficiaries of Australian tax dollars -- mainly Australians and Australian merchants -- should pay the 10% tax. In response to the new law, Australians have been cut off from Amazon's main site. Amazon said that Amazon.com, its American website, and other overseas sites would no longer ship to Australian addresses from July 1. Shoppers visiting those sites will be redirected to Amazon.com.au, which launched late last year and stocks about 60 million products, compared to almost half a billion on its US site. There you go. The "playing field" has been levelled, as proponents like Harvey Norman requested. Local retailers will now only compete with Amazon's local site. How much "fairer" could it get? And yet, they're complaining that the level playing field is also "doing the dirty." Australians aren't happy about this. The limited selection they'll be forced to purchase from doesn't give them nearly as many options as Amazon's US site. The playing field is so level all Australians will be frustrated equally with their inability to source obscure goods and/or the hassle of using reshippers to get products local retailers don't carry. "Australians are very isolated and it's the likes of Amazon that have enabled consumers to have more variety," said Darren Price, a Sydney-based tech writer. "Otherwise you end up waiting for whenever Harvey Norman is going to get it in stock."
Mr Price -- who spends about $500 a year on Amazon.com, mostly for computer components that aren’t available locally -- said he and many other Australians would likely get around the blockade by using package redirection services, which receive orders shipped to addresses in the US and then forward them to Australia. In their desperation to punish Amazon for being successful, retailers like Harvey Norman have only managed to piss off customers already unhappy with their lack of selection. This mistake was compounded by the Australian government, which decided that tagging Amazon with the tax (rather than shippers or purchasers) would "cause the least disruption to consumers." I guess no one thought a discussion with consumers might help define what is or isn't disruptive. Punishing one large company to help another large company really doesn't do anything but hurt consumers. Permalink | Comments | Email This Story

posted 12 days ago on techdirt
When Google Fiber first arrived back in 2010, it was lauded as a game changer for the broadband industry. Google Fiber would, we were told, revolutionize the industry by taking Silicon Valley money and using it to disrupt the viciously uncompetitive and anti-competitive telecom sector. Initially things worked out well; cities tripped over themselves offering all manner of perks to the company in the hopes of breaking free from the broadband duopoly logjam. And in some areas where Google Fiber was deployed, prices certainly dropped thanks to Google Fiber market pressure. But that was then, and this is now. In late 2016 Alphabet made it clear that the company had grown bored with the high costs and slow pace of deploying fiber. The project has burned through several CEOs in just a year, laid off numerous employees, and the company ultimately announced it was considering a pivot to cheaper wireless technology. The problem: Google's still conducting numerous tests in various spectrum bands (including millimeter wave), but doesn't actually know what this replacement tech looks like yet. Meanwhile, the cities once promised a broadband revolution are seeing that hope replaced with annoyance and frustration. While the company stated it would be putting any new builds on hold, it insisted that existing projects that were underway wouldn't be impacted. That hasn't proven to be the case, with users in initial launch markets like Kansas City saying their installations had been cancelled with no real explanation after years of waiting. That same song is also playing out in markets like Atlanta, where hope and excitement have shifted to something decidedly... different: "It’s been more than three years since the Google Fiber frenzy took hold of the Atlanta area. From Alpharetta to Avondale Estates, Sandy Springs to Smyrna, folks fed up with chronically unreliable internet connections, abysmal customer service and expensive monthly bills lapped up Google Fiber’s promise....Google has released little public information about the Atlanta rollout delays, and company officials declined WABE’s multiple requests for an interview on the status of the project and other specifics. Noting a trend yet? You'll notice the same complaints in Austin, one of Google Fiber's more robust builds, where locals point out that progress appears to have stopped for many users who say the technology was installed, but progress just magically ceased: "Construction is complete. Equipment is installed. But a year later, a south Austin neighborhood says they're still waiting on Google Fiber to actually work...Today, some residents say they can't get a straight answer on what's taking so long to access the high-speed internet... Susan Speyer says when she was signing up for Google Fiber, she was told she'd have service in, "Just a few weeks to max three months." And as the months passed, cable and internet bills with other providers, they say, have gone up. Neighbor Sherry Lowry adding, "It's doubled since all of this started with Google." To be fair, Google's PR folks can't offer answers of what comes next because Google itself doesn't know what the wireless technology that will supplant fiber will look like. But even Google's wireless promises have been decidedly shaky. After acquiring urban wireless provider Webpass two years ago, some of that company's coverage markets have actually shrunk, with the provider simply pulling out of cities like Boston without much explanation. 
And many of the executives that were part of that acquisition have "suddenly" departed for unspecified reasons. At this point it's certainly possible that once Google Fiber is done with its multi-year, numerous wireless tests it settles on a cheaper (but still expensive and time consuming) alternative to fiber. But as the company's newfound apathy and steady retreat from net neutrality advocacy makes clear, this isn't the same company Alphabet/Google was when this experiment started, and it remains entirely possible the entire project is scuttled or sold off as Google itself inevitably shifts from innovation and disruption to turf protection (especially with ISPs like Comcast and AT&T pushing harder into advertising). Meanwhile, the broadband sector is actually getting less competitive than ever as the nation's telcos give up on upgrading aging DSL lines, leaving the nation's cable providers with greater regional monopolies than ever before. The fact that nobody wants to upgrade this nation's already mediocre broadband infrastructure (because it's not profitable enough, quickly enough for Wall Street) is a major reason more and more towns and cities are simply building their own broadband networks -- assuming states haven't banned them from doing so at large ISP behest. Based on what we're seeing lately, those hoping that Google had the money, resources and willpower to shake the broadband sector out of its monopoly dysfunction shouldn't hold their breath. Permalink | Comments | Email This Story

posted 13 days ago on techdirt
As Techdirt readers know, there is a ratchet effect that means copyright always gets longer and stronger. As well as being inherently unfair -- why must the public always lose out when copyright law is changed? -- there's another unfortunate consequence. If the term or breadth of copyright were reduced from time to time, we would be able to test the effects of doing so on things like creativity. For example, if it turned out that shortening copyright increased the number of works being produced, then there would be a strong argument for reducing it further in the hope that the effect would be strengthened. The fact that we have been unable to test this hypothesis is rather convenient for copyright maximalists. It means they can continue to call for the term of copyright to be increased without having to address the argument that this will cause less creativity, or reduce access to older works. Even though it is not possible to test the effects of reduced copyright directly, two US academics, Barbara Biasi and Petra Moser, have spotted a clever way of investigating the idea indirectly, in the field of science publishing. As they write in a post on CEPR's policy portal, in 1942 the US Book Republication Program (BRP) allowed US publishers to reprint exact copies of German-owned science books, effectively abolishing copyright for that class of works. They have looked at what impact this dramatic change had on the use of those reprinted works by scientists. Comparing citation rates before and after the BRP was introduced is not enough on its own: citation rates fluctuate, so it is necessary to compare the BRP citation rate with something else. The researchers' solution is to look at the citation rate of Swiss books from the same time: This approach addresses the issue that English-language citations may have increased mechanically after 1942, if English-language scientists published more after the war. Like German scientists, Swiss scientists were leaders in chemistry and mathematics and wrote primarily in German, but due to Switzerland's neutrality, Swiss-owned copyrights were not accessible to the BRP. [Ordinary least squares (OLS)] estimates of a matched sample of BRP and Swiss books (in similar fields and with similar levels of pre-BRP non-English citations) confirm the significant increase in citations in response to the BRP. Specifically, there was a 67% increase in citations of BRP books compared to similar Swiss books. The research suggests this was driven largely by the 25% drop in average prices seen after the BRP scheme was introduced. The reduction in price seems to have allowed a wider range of US libraries to purchase the more affordable BRP texts, whereas Swiss books remained concentrated in the holdings of two wealthy research libraries (Yale and Chicago). Better access was correlated with more citations: the data shows that the latter increased most near the locations of BRP libraries. The researchers conclude: In the context of contemporary debates, our findings imply that policies which strengthen copyrights, such as extensions in copyright length, can create enormous welfare costs by discouraging follow-on science, especially among less affluent institutions and scientists. Critics might point out that this is just one study of one rather specific area. But that's an argument for reducing copyright terms, perhaps on a trial basis, to see whether the results of this research are confirmed.
However, the copyright ratchet will never allow that, not least because the companies involved probably know it would confirm that constantly strengthening copyright is bad for everyone except themselves. Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+ Permalink | Comments | Email This Story
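For those curious about the mechanics of that comparison, here is a toy difference-in-differences sketch in Python. The numbers are invented and the researchers' actual estimates come from regressions on matched samples of books, but the structure -- using the Swiss books as a control to strip out any general post-war rise in English-language citations -- follows the logic described above.

```python
# Toy difference-in-differences sketch of the BRP comparison described above.
# All citation numbers are made up; only the structure mirrors the study's logic
# (BRP books vs. comparable Swiss books, before vs. after the 1942 program).

citations = {
    # (group, period): hypothetical average citations per book per year
    ("BRP",   "pre-1942"):  1.0,
    ("BRP",   "post-1942"): 2.0,
    ("Swiss", "pre-1942"):  1.0,
    ("Swiss", "post-1942"): 1.2,
}

brp_change   = citations[("BRP", "post-1942")]   - citations[("BRP", "pre-1942")]
swiss_change = citations[("Swiss", "post-1942")] - citations[("Swiss", "pre-1942")]

# The Swiss books act as the control: any general post-war rise in citations
# shows up in both groups, so subtracting the Swiss change isolates the effect
# of the cheaper, more widely available BRP reprints.
brp_effect = brp_change - swiss_change
print(f"Estimated BRP effect: {brp_effect:+.1f} citations per book per year")
# Estimated BRP effect: +0.8 citations per book per year
```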

Read More...
posted 13 days ago on techdirt
Perhaps you thought that the legal drama between the famous San Diego Comic-Con and the Salt Lake Comic Con was over. Our ongoing coverage of this trademark dispute stemming from SDCC somehow having a valid trademark on "comic-con", a shortened descriptor phrase for a comic convention, largely concluded when SDCC "won" in court, being awarded $20,000 after initially asking for $12 million in damages. With the focus now turning to the roughly gazillion other comic conventions that exist using the "comic-con" phrase in their names and marketing materials, this particular dispute seemed to have come to a close. But not so much, actually. In post-trial motions, SDCC petitioned Judge Battaglia to consider the case "exceptional" so that SDCC can recover attorney's fees from SLCC. The argument for SDCC appears to mostly be that they spent a shit-ton of money on attorneys for the case. U.S. District Judge Anthony Battaglia heard a host of posttrial motions Thursday, including San Diego Comic-Con’s request for over $4.5 million in attorney fees which have already been paid in full. San Diego Comic-Con attorney Callie Bjurstrom with Pillsbury Law told Battaglia Thursday he should find the case is “exceptional” so that attorney fees and costs can be awarded. “This was a very expensive case; the reason this case was so expensive was because of defendants and their counsel and the way they litigated this case,” Bjurstrom said. It will be interesting to see how Judge Battaglia rules on the assertion that SLCC's defense of itself warrants its paying SDCC's attorney's fees. What exactly was SLCC supposed to do, not try to defend itself in the best way possible? One also wonders if SDCC would be petitioning for attorney's fees had the jury found that SLCC's infringement was not willful, resulting in the paltry $20k award. Perhaps, perhaps not. What this sure looks like is the SDCC realizing that this "win" came at the cost of a hilariously large amount of money and it is attempting to mitigate that loss. SDCC also petitioned the court to bar SLCC from using its trademarks. That sort of thing would be par for the course except for two things. First, again, this trademark is ridiculous. It's purely descriptive. Second, hammering home that fact, SDCC doesn't want SLCC to even be able to properly describe the type of event it is. But San Diego Comic-Con’s request went a step further than simply asking Battaglia to enjoin the Salt Lake convention operators from infringing its trademarks: it asked the judge to bar the Salt Lake convention from using the words “comic convention” or phonetic equivalents to “Comic Con” or “comic convention.” That request should lay plain how dumb this all is. If a comic convention cannot refer to itself as such because that is too close to the trademark "comic-con", then it should be plain as day that "comic-con" is purely descriptive and, therefore, invalid as a trademark. I wouldn't be surprised to see this petition to the court turn up at the USPTO in a bid to cancel SDCC's trademark entirely. That's certainly what I would be doing if I were heading up any of the hundreds of comic cons out there. Permalink | Comments | Email This Story

Read More...
posted 13 days ago on techdirt
Government agencies, for the most part, treat public records requesters as weeds in the garden of governance: a pest that can never be fully eradicated, but rather tolerated with as much annoyance as possible. Whatever can't be made to disappear with hefty fee demands or months of stonewalling will be given as little attention and compliance as possible. This attitude has turned FOIA requesters into frequent litigators seeking to hold one branch of the government accountable by using another. When Cheryl Brantley, a member of activist group A Better Way for BPA, requested records from the Bonneville Power Administration (run by the Department of Energy), she filled out the agency's online FOIA form and waited. And waited. And waited some more before finally suing. BPA responded by declaring A Better Way had no standing to file a lawsuit. It decided to get hypertechnical about Brantley's FOIA submission, claiming no one but Brantley herself should be allowed to sue. The district court granted the BPA's motion to dismiss for lack of standing. A Better Way appealed this decision, placing it before the Ninth Circuit Court of Appeals. The court is completely unimpressed with the BPA's attempt to turn a meaningless technicality into a motion to dismiss. From the decision's [PDF] summary: [h/t Brad Heath] The government challenged the group’s standing and the district court dismissed the suit, saying that the submitted form did not adequately identify the organization as the requester. We disagree. FOIA forms should not be a “gotcha” proposition requiring a lexicographer to discern who made the request. Brantley's request identified two parties as requesters: herself and A Better Way. Further down in the form, she clicked a box designating this as a request by an "individual" for personal use. Later comments on the same form indicated Brantley was requesting this on behalf of A Better Way, referring to "technical advisers" who would help disseminate info obtained "to our members." Even if the BPA wanted to get technical about Brantley's choice of "individual" on the online form, its own communications with Brantley made it clear the agency felt it was dealing with a group, rather than Brantley herself. [O]n February 18, 2015, the agency sent a letter addressed to “Cheryl Brantley[,] A Better Way for BPA,” stating that BPA had been in touch with [David] Bricklin [A Better Way's attorney], granting a fee waiver, noting the complexity of the request, and estimating completion by September 30, 2015. On September 28, 2015, BPA sent another letter, addressed the same way, advising of its need to submit certain records to third-party entities for review and thus “extending the target date for BPA’s response to your request to March 31, 2016.” The agency continued to communicate with A Better Way’s counsel. Significantly, on November 13, 2015, BPA sent an email to Bricklin with the subject line: “BPA-2015- 00597-F-Brantley (A Better Way for BPA) - DEIS for I-5 Corridor Reinforcement Project - 5 U.S.C. § 552(b)(4) determination letters.” Two days later, BPA sent another email to Bricklin with a similar subject line: “BPA-2015- 00597-F-Brantley (A Better Way for BPA) - DEIS for I-5 Corridor Reinforcement Project - communication with the requester’s counsel.” The appeals court makes short work of BPA's attempt to dodge litigation predicated on its own failure to produce responsive documents. 
Viewing the form as a whole, it is clear that the request was made on behalf of A Better Way, that the request was not for commercial purposes, that there was an obvious public interest related to BPA’s I-5 Corridor Reinforcement Project, and that the requester had “members,” hardly a characteristic of an individual requester. Any confusion in the electronic form was of BPA’s own making and could easily be fixed by including a place to check that the request is made “on behalf of” an organization or by adding “public interest organization” or “other” options under Type of Requester. The court goes on to note that BPA certainly knew the group requesting documents and acknowledged -- through multiple communications with A Better Way's counsel -- that A Better Way was the ultimate recipient of the sought documents. Pretending otherwise is just conveniently disingenuous. To the extent ambiguity exists with how Brantley filled out the form—and we do not think that any does—the follow-on correspondence between BPA and the requester affirms that A Better Way was the requester and that BPA treated A Better Way as the requester. For example, BPA addressed letters to “Cheryl Brantley[,] A Better Way for BPA,” twice placed “A Better Way” in the subject line of emails concerning the request, and regularly communicated with the organization’s lawyer. This treatment was unsurprising, as A Better Way and BPA were hardly strangers. During a six-month period from December 2009 to June 2010, for instance, the organization submitted ten FOIA requests to the agency. BPA cannot reverse course now and convince us that the organization with whom it was regularly corresponding and which it acknowledged as the requester should be out of court. Finally, the court says common sense must be used when dealing with FOIA requesters, who are usually citizens untrained in the art of obfuscatory bureaucracy. The court points out BPA, on multiple occasions, made it clear through communications with the group's counsel that it knew it was dealing with A Better Way, not Cheryl Brantley acting of her own accord. The case is sent back to the lower court to allow A Better Way to continue suing BPA for records it still hasn't turned over. And hopefully the BPA will act in good faith in the future when being sued for unresponsiveness. But probably not. After all, it's not the agency's money being wasted. That comes from US taxpayers -- an apparently bottomless source of revenue. Permalink | Comments | Email This Story

Read More...
posted 13 days ago on techdirt
The Copia Institute was not the only party to file an amicus brief in support of Airbnb and Homeaway's Ninth Circuit appeal of a district court decision denying them Section 230 protection. For instance, a number of Internet platforms, including those like Glassdoor, which hosts specialized user expression, and those like eBay, which hosts transactional user expression, filed one pointing out how a ruling denying Airbnb and Homeaway would effectively deny it to far more platforms hosting far more kinds of user speech than just those platforms behind the instant appeal. And then there was this brief, submitted on behalf of former Congressman Chris Cox, who, with then-Representative Ron Wyden, had been instrumental in getting Section 230 on the books in the first place. With this brief the Court does not need to guess whether Congress intended for Section 230 to apply to platforms like Airbnb and Homeaway; the statute's author confirms that it did, and why. In giving insight into the statutory history of Section 230 the brief addresses the two main issues raised by the Airbnb appeal – issues that are continuing to come up over and over again in Section 230-related litigation in state and federal courts all over the country: does Section 230 apply to platforms intermediating transactional user expression, and does Section 230's pre-emption language preclude efforts by state and local authorities to hold these platforms liable for intermediating the consummation of the transactional speech? Cox's brief describes how Congress intended both these questions to be answered in the affirmative and thus may be relevant to these other cases. With that in mind, we are archiving – and summarizing – the brief here. To illustrate why Section 230 should apply in these situations, first the brief explains the historical context that prompted the statute in the first place: In 1995, on a flight from California to Washington, DC during a regular session of Congress, Representative Cox read a Wall Street Journal article about a New York Superior Court case that troubled him deeply. The case involved a bulletin board post on the Prodigy web service by an unknown user. The post said disparaging things about an investment bank. The bank filed suit for libel but couldn’t locate the individual who wrote the post. So instead, the bank sought damages from Prodigy, the site that hosted the bulletin board. [page 3] The Stratton Oakmont v. Prodigy decision alarmed Cox for several reasons. One, it represented a worrying change in judicial attitudes towards third party liability: Up until then, the courts had not permitted such claims for third party liability. In 1991, a federal district court in New York held that CompuServe was not liable in circumstances like the Prodigy case. The court reasoned that CompuServe “ha[d] no opportunity to review [the] contents” of the publication at issue before it was uploaded “into CompuServe’s computer banks,” and therefore was not subject to publisher liability for the third party content. [page 3-4] Two, it had also resulted in a damage award of $200 million against Prodigy. [page 4]. Damage awards like these can wipe technologies off the map. If platforms had to fear the crippling effect that even one such award, arising from just one user, could have on their developing online services, it would dissuade them from being platforms at all. As the brief observes: The accretion of burdens would be especially harmful to smaller websites. 
Future startups, facing massive exposure to potential liability if they do not monitor user content and take responsibility for third parties’ legal compliance, would encounter significant obstacles to capital formation. Not unreasonably, some might abjure any business model reliant on third-party content. [page 26] Then there was also a third, related concern: according to the logic of Stratton Oakmont, which had distinguished itself from the earlier Cubby v. Compuserve case, unlike Compuserve, Prodigy had "sought to impose general rules of civility on its message boards and in its forums." [page 4]. The perverse incentive this case established was clear: Internet platforms should avoid even modest efforts to police their sites. [page 4] The essential math was stark: Congress was worried about what was going on on the Internet. It wanted platforms to be an ally in policing it. But without protection for platforms, they wouldn't be. They couldn't be. So Cox joined with Senator Wyden to craft a bill that would trump the Stratton Oakmont holding. The result was the Internet Freedom and Family Empowerment Act, H.R. 1978, 104 Cong. (1995), which, by a 420-4 vote reflecting significant bipartisan support, became an amendment to the Communications Decency Act – Congress's attempt to address the less desirable material on the Internet – which then came into force as part of the Telecommunications Act of 1996. [page 5-6]. The Supreme Court later gutted the indecency provisions of the CDA in Reno v. ACLU, but the parts of the CDA at Section 230 have stood the test of time. [page 6 note 2]. The statutory language provided necessary relief to platforms in two important ways. First, it included a "Good Samaritan" provision, meaning that "[i]f an Internet platform does review some of the content and restricts it because it is obscene or otherwise objectionable, then the platform does not thereby assume a duty to monitor all content." [page 6]. Because keeping platforms from having to monitor was the critical purpose of the statute: All of the unique benefits the Internet provides are dependent upon platforms being able to facilitate communication among vast numbers of people without being required to review those communications individually. [page 12] The concerns were practical. As other members of Congress noted at the time, "There is no way that any of those entities, like Prodigy, can take the responsibility [for all of the] information that is going to be coming in to them from all manner of sources.” [page 14] While the volume of users [back when Section 230 was passed] was only in the millions, not the billions as today, it was evident to almost every user of the Web even then that no group of human beings would ever be able to keep pace with the growth of user-generated content on the Web. For the Internet to function to its potential, Internet platforms could not be expected to monitor content created by website users. [page 2] Thus Section 230 established a new rule expressly designed to spare platforms from having to attempt this impossible task in order to survive: The rule established in the bill [...] was crystal clear: the law will recognize that it would be unreasonable to require Internet platforms to monitor content created by website users. Correlatively, the law will impose full responsibility on the website users to comply with all laws, both civil and criminal, in connection with their user-generated content. 
[But i]t will not shift that responsibility to Internet platforms, because doing so would directly interfere with the essential functioning of the Internet. [page 5] That concern for the essential functioning of the Internet also explains why Section 230 was not drawn narrowly. If Congress had only been interested in protecting platforms from liability for potentially defamatory speech (as was at issue in the Stratton Oakmont case), it could have written a law that only accomplished that end. But Section 230's language was purposefully more expansive. If it were not more expansive, while platforms would not have to monitor all the content they intermediated for defamation, they would still have to monitor it for everything else, and thus nothing would have been accomplished with this law: The inevitable consequence of attaching platform liability to user-generated content is to force intermediaries to monitor everything posted on their sites. Congress understood that liability-driven monitoring would slow traffic on the Internet, discourage the development of Internet platforms based on third party content, and chill third-party speech as intermediaries attempt to avoid liability. Congress enacted Section 230 because the requirement to monitor and review user-generated content would degrade the vibrant online forum for speech and for e-commerce that Congress wished to embrace. [page 15] Which brings us back to why Section 230 was intended to apply to transactional platforms. Congress didn't want to be selective about which types of platforms could benefit from liability protection. It wanted them all to: [T]he very purpose of Section 230 was to obliterate any legal distinction between the CompuServe model (which lacked the e-commerce features of Prodigy and the then-emergent AOL) and more dynamically interactive platforms. … Congress intended to “promote the continued development of the Internet and other interactive computer services” and “preserve the vibrant and competitive free market” that the Internet had unleashed. Forcing web sites to a Compuserve or Craigslist model would be the antithesis of the congressional purpose to “encourage open, robust, and creative use of the internet” and the continued “development of e-commerce.” Instead, it will slow commerce on the Internet, increase costs for websites and consumers, and restrict the development of platform marketplaces. This is just what Congress hoped to avoid through Section 230. [page 23-24] And it wanted them all to be protected everywhere because Congress also recognized that they needed to be protected everywhere in order to be protected at all: A website […] is immediately and uninterruptedly exposed to billions of Internet users in every U.S. jurisdiction and around the planet. This makes Internet commerce uniquely vulnerable to regulatory burdens in thousands of jurisdictions. So too does the fact that the Internet is utterly indifferent to state borders. These characteristics of the Internet, Congress recognized, would subject this quintessentially interstate commerce to a confusing and burdensome patchwork of regulations by thousands of state, county, and municipal jurisdictions, unless federal policy remedied the situation. 
[page 27] Congress anticipated that states and local authorities would be tempted to impose liability on platforms, and in doing so interfere with the operation of the Internet by forcing platforms to monitor after all and thus cripple their operation: Other state, county, and local governments would no doubt find that fining websites for their users’ infractions is more convenient than fining each individual who violates local laws. Given the unlimited geographic range of the Internet, unbounded by state or local jurisdiction, the aggregate burden on an individual web platform would be multiplied exponentially. While one monitoring requirement in one city may seem a tractable compliance burden, myriad similar-but-not-identical regulations could easily damage or shut down Internet platforms. [page 25] So, "[t]o ensure the quintessentially interstate commerce of the Internet would be governed by a uniform national policy" of sparing platforms the need to monitor, Congress deliberately foreclosed the ability of state and local authorities to interfere with that policy with Section 230's pre-emption provision. [page 10]. Without this provision, the statute would be useless: Were every state and municipality free to adopt its own policy concerning when an Internet platform must assume duties in connection with content created by third party users, not only would compliance become oppressive, but the federal policy itself could quickly be undone. [page 13] This pre-emption did not make the Internet a lawless place, however. Laws governing offline analogs to the services starting to flourish on the web would continue to apply; Section 230 simply prevented platforms from being held derivatively liable for user generated content that violated them. [page 9-10]. Notably, none of what Section 230 proposed was a controversial proposition: When the bill was debated, no member from either the Republican or Democratic side could be found to speak against it. The debate time was therefore shared between Democratic and Republican supporters of the bill, a highly unusual procedure for significant legislation. [page 11] It was popular because it advanced Congress's overall policy to foster the most beneficial content online, and the least detrimental. Section 230 by its terms applies to legal responsibility of any type, whether under civil or criminal state statutes and municipal ordinances. But the fact that the legislation was included in the CDA, concerned with offenses including criminal pornography, is a measure of how serious Congress was about immunizing Internet platforms from state and local laws. Internet platforms were to be spared responsibility for monitoring third-party content even in these egregious cases. A bipartisan supermajority of Congress did not support this policy because they wished to give online commerce an advantage over offline businesses. Rather, it is the inherent nature of Internet commerce that caused Congress to choose purposefully to make third parties and not Internet platforms responsible for compliance with laws generally applicable to those third parties. 
Platform liability for user-generated content would rob the technology of its vast interstate and indeed global capability, which Congress decided to “embrace” and “welcome” not only because of its commercial potential but also “the opportunity for education and political discourse that it offers for all of us.” [page 11-12] As the brief explains elsewhere, Congress's legislative instincts appear to have been borne out, and the Internet today is replete with valuable services and expression. [page 7-8]. Obviously not everything the Internet offers is necessarily beneficial, but the challenges the Internet's success poses don't negate the policy balance Congress struck. Section 230 has enabled those successes, and if we want its commercial and educational benefit to continue to accrue, we need to make sure that the statute's critical protection remains available to all who depend on it to realize that potential. Permalink | Comments | Email This Story

Read More...
posted 13 days ago on techdirt
In the wake of the Trump FCC's attack on net neutrality last December (which formally takes effect on June 11), more than half the states in the country are now exploring their own net neutrality rules. Some states (like Oregon and Washington) have passed state laws, while others (like New York and Montana) have embraced new executive orders that limit ISP ability to strike state contracts if they violate net neutrality. All told, it's not exactly the outcome AT&T, Verizon, and Comcast lobbyists were hoping for, and it's a pretty solid indication they really didn't think this entire thing through particularly well. But at the moment, most eyes rest on California, where one of the tougher new state-level replacement laws just took a major step forward. Senator Scott Wiener’s SB 822 would prevent ISPs in California from engaging in blocking, throttling, or paid prioritization. The EFF has called the bill the "gold standard" for state-level net neutrality law. The proposal actually goes a bit further than the FCC rules it's intended to replace, in part because it more tightly polices things like zero rating and usage caps, which have long been used anti-competitively by incumbent ISPs as a way to make life more difficult for companies trying to elbow in on traditional TV revenues. Despite a major push by industry lobbyists, SB 822 last week was approved 23-12 by the California Senate and will now head to the state Assembly (sometime before the end of this month). If it passes there, it will be on to the desk of Governor Jerry Brown for signing. The Senate just passed my bill protecting #NetNeutrality in California, #SB822. I’m deeply appreciative for my colleagues’ support of this effort to protect the internet. If our federal govt won’t protect a free & open internet, the States must step in. Now on to the Assembly ... — Scott Wiener (@Scott_Wiener) May 30, 2018 California's law will be one to watch. Comcast, AT&T and Verizon successfully lobbied the Trump FCC to include language in their net neutrality repeal attempting to ban states from protecting broadband consumers, language companies like Charter are already using to try and tap dance out of lawsuits for substandard service. But the FCC's authority here is shaky, and some legal experts (like Stanford Professor Barbara van Schewick) have argued that when the FCC rolled back its Title II authority over ISPs, it also dismantled its right to tell these states what to do: "The bill is on firm legal ground. While the FCC’s 2017 Order explicitly bans states from adopting their own net neutrality laws, that preemption is invalid. According to case law, an agency that does not have the power to regulate does not have the power to preempt. That means the FCC can only prevent the states from adopting net neutrality protections if the FCC has authority to adopt net neutrality protections itself. But by re-classifying ISPs as information services under Title I of the Communications Act and re-interpreting Section 706 of the Telecommunications Act as a mission statement rather than an independent grant of authority, the FCC has deliberately removed all of its sources of authority that would allow it to adopt net neutrality protections. The FCC’s Order is explicit on this point. Since the FCC’s 2017 Order removed the agency’s authority to adopt net neutrality protections, it doesn’t have authority to prevent the states from doing so, either." 
ISPs have promised to "aggressively sue" any states that try to pass rules protecting net neutrality or broadband consumer privacy. And while this will only disgust the majority of Americans even more, the combination of limited competition and rubber stamp regulators like Ajit Pai means there's not much in the way of punishment for the tech policy equivalent of giving a giant middle finger to the nation's consumers, small businesses, and healthy competition. Permalink | Comments | Email This Story

Read More...
posted 13 days ago on techdirt
The government rarely likes to play fair in court. This is why we have the (repeatedly-violated) Brady rule (which forces the production of exonerative evidence) and other precedential decisions to guide the government towards treating defendants the way the Constitution wants them to be treated, rather than the way the government would prefer to treat them. In a case involving drug charges predicated on the distribution of synthetic marijuana, the government tried to keep testimony of a DEA chemist out of the hands of two charged defendants. The Fourth Circuit Court of Appeals says this isn't OK in a decision [PDF] that gets very weedy (why yes, pun intended) pretty quickly. That's the nature of synthetics -- and the nature of DEA determinations on controlled substance analogues. The two proprietors of Zencense -- Charles Ritchie and Benjamin Galecki -- decided to manufacture and distribute their own blend of spice, using XLR-11 and UR-144 as active ingredients. The DEA raided Zencense's Las Vegas production facility, charging the pair with conspiracy to distribute controlled substance analogues. The government alleges both synthetics are analogues of JWH-018, which is a controlled substance. Unfortunately, its own chemist disagrees with this assertion. The DEA’s determination that a substance is an analogue is made by its Drug and Chemical Evaluation Section (DRE). During the process of determining if UR-144 is an analogue, the DRE solicited the views of Dr. Arthur Berrier, a Senior Research Chemist with the DEA’s Office of Forensic Sciences. Dr. Berrier concluded that UR-144 is not substantially similar in chemical structure to JWH-018, which would mean that it is not outlawed by the Analogue Act. This means the distribution was only half as illegal as the government asserts. Or, possibly, not illegal at all, as this footnote portrays the government's assertions. All of the expert testimony in this case agreed that XLR-11 and UR-144 are indistinguishable, and the Government treats them as the same substance. If they're similar, and UR-144 isn't "substantially similar" to controlled substance JWH-018, the government doesn't have much of a case left to prosecute. The charges hinge on the defendants' knowledge that the substances they manufactured were illegal analogues. But the DEA's chemist is on record stating that the substance Zencense emulated isn't actually a controlled substance. Upon learning this, the defendants sought to obtain the chemist's testimony. The government refused their request. The Government opposed the motion to compel, arguing that “some of the information sought [was] part of the deliberative process and is therefore privileged.” (J.A. 673). The district court denied the Defendants’ motion, “find[ing] that the denial of this Touhy request is appropriate as it would violate the Deliberative Process Privilege of the Drug Enforcement Agency to grant the subpoena”. As the court points out, this is a ridiculous position for the government to take. While the government made a proper claim of privilege, there's nothing privileged about the DEA chemist's assertions. Applying this framework, we readily conclude that the district court erred in concluding that the deliberative process privilege applies because, to the extent the privilege covers Dr. Berrier, the Government has waived any reliance on it. The Government has, by its own admission, provided Dr. Berrier’s opinion as Brady material in criminal cases involving XLR-11 and UR-144. See United States v. 
$177,844.68 in U.S. Currency, 2015 WL 4227948, *3 (D. Nev. 2015) (cataloguing cases). Moreover, Dr. Berrier recently testified in open court pursuant to a motion to compel in an analogue case involving the distribution of UR-144. See United States v. Broombaugh, 2017 WL 2734636 (D. Kan. 2017) (ordering the unsealing of Dr. Berrier’s testimony). Finally, Dr. Berrier’s opinion that UR-144 is not an analogue of JWH-018 is freely available online. See Federal Judicial Center, Litigating Synthetic Drug Cases, http://fln.fd.org/files/training/April%202015%20Handout.pdf, pp. 37-41 (last visited May 16, 2018). Therefore, Dr. Berrier’s opinion was accessible to everyone but the jurors in this case. As the court notes, compelling testimony is limited to that which is "favorable" and "material" to the defense. Clearly, Dr. Berrier's testimony is favorable, as it shows the analogue produced by the defendants was not identical to a controlled substance. As for the materiality of the testimony, the appeals court will let the district court decide. It seems extremely material, as the government's case rests on its accusations of manufacture of a controlled substance analogue, which is at odds with its own expert's assertions. Unfortunately, this isn't the end of the road for the defendants. They have already faced two trials (one mistrial, one resolved with an Allen charge delivered to a deadlocked jury); with their convictions now vacated, the government has a chance to put them on trial one more time. Even when the government apparently has it wrong, it's still given multiple chances to obtain a conviction. Permalink | Comments | Email This Story

Read More...
posted 13 days ago on techdirt
Meet Scapple, freeform mind-mapping software that lets you easily record and find connections between your ideas. It’s designed to help you put all your ideas in one place, then draw logical conclusions about them. Trace lines or arrows between related ideas, easily share ideas with others, and move notes around without ever running out of space. Whether you’re working on a business venture, blog, or tech project, Scapple is the tool you need to take your work to the next level. It's on sale for $9.99. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team. Permalink | Comments | Email This Story

Read More...
posted 13 days ago on techdirt
Recent school shootings have led to heightened reactions from school officials and law enforcement. An over-correction of sorts -- thanks to the shooter in Florida having been brought to law enforcement's attention several times prior to the shooting -- has resulted in the arrest of hundreds of students across the nation. The problem isn't so much treating potential threats as credible until proven otherwise. The problem is there's so very little subtlety applied. Things that should not be perceived as threats are, and even when they're determined to be either unfounded or not actually a threat, some schools decide their misperceptions are more important than the reality of the situation. (h/t Reason) The graduating class of Truman High School in Independence, Missouri brainstormed senior pranks. Kylan Scheele came up with a pretty decent idea. He posted his school for sale on Craigslist. The ad read: Truman High School - $12725 Huge 20+ room facility. Newly build football field. Baseball Field to the SE. Newly added 4 modern day rooms. Has: Centralized air, heating, plumbing. Next to Walmart for convenience Huge parking lot, great for partygoers looking for somewhere to park Bigger than normal dinning room. Multi stove, oven, fridge and other appliances in the kitchen. Reason for sale is due to the loss of students coming up. Named after hometown resident U.S. President Harry S. Truman and his family. About as innocuous a prank as anyone could have played on the school, one would think. But then, one would probably not be the Truman High School administration. They turned it over to law enforcement. Detectives with the Independence Police Department investigated the incident and found no probable cause or reason to pursue criminal charges. They had Scheele delete the post and advised him to talk to school administrators. “They [detectives] didn’t see a credible threat,” Clark said. “They all kind of had a little chuckle about it but they wanted him to understand you could see how other people could see it as a threat.” And how could people see this as a threat? Well, the school seized on one line of the faux ad: "loss of students coming up." Obviously, this referred to the pending graduation. The school, however, somehow read this to mean Scheele planned on harming the student body. That prompted the handover to police. And when it was handed back, the school doubled down on its "seriousness." We take student safety very seriously and appreciate the students and parents who brought this to our attention. Out of an abundance of caution, administrators and police investigated and determined there was not a credible threat. A student who makes a real or implied threat, whether it is deemed credible or not, will face discipline. Due to the heightened concern nationally with school violence, we have extra police officers for the remainder of the school year and will have additional officers at graduations for all of our high schools. Good lord. So, the non-threat the police considered non-threatening has resulted in Scheele's suspension and his ban from the graduation ceremony. The "implied threat" the school somehow read into a statement about graduating seniors is keeping one student from getting his diploma with his classmates. It's also resulted in a lawsuit [PDF]. The ACLU represented Scheele in his demand for a restraining order barring the school from blocking him from picking up his diploma at the graduation ceremony. The suit was filed May 25th, and the court has already ruled in favor of the school. 
The school has also refused to back down, claiming the bogus ad caused "substantial disruption" and resulted in multiple parents retrieving their kids from school. (Wonder how much of that was due to the school informing parents it had turned over a "credible" threat to law enforcement?) As the lawsuit pointed out, there's no way the student intended to cause a disruption and no "reasonable" person could have imagined the outcome would have been school officials attempting to turn a satirical "for sale" ad into a criminal offense. The disruption was of the school's own making, but the punishment will be borne solely by the student who posted the ad. Permalink | Comments | Email This Story

Read More...
posted 13 days ago on techdirt
For years now we've documented the shitshow that is broadband industry customer satisfaction. That shitshow is generally thanks to a continued lack of real competition in the space, which not only lets these companies mindlessly raise rates like it's going out of style, but also gives companies like Comcast the leeway to experiment with terrible, anti-competitive practices like arbitrary and punitive usage caps and overage fees. And that's of course before you get to the clown car that passes for customer service at many of these companies, which routinely makes headlines for all the wrong reasons. Year after year we witness a rotating crop of bizarre stories highlighting how terribly these entrenched monopolies treat their subscribers. And each year industry executives insist that they've learned the error of their ways and have dedicated themselves and their budgets to fixing the "consumer experience." Except because these companies all but own state and federal lawmakers -- and see virtually no competition in their markets (especially at higher speeds) -- things never actually get better. Case in point: the American Customer Satisfaction Index has released its latest analysis of customer satisfaction with the broadband industry. And what it found isn't pretty. In short, every single major ISP but one saw a decline in customer satisfaction over the last year. Note that these scores are worse than every other industry the ACSI tracks, including the airline, insurance, and banking sectors. And these scores are even well below consumer satisfaction with many government agencies, including the IRS. Comcast in fact is the only company to see no change whatsoever (though its TV services saw a 1 point decline), which is still notable given its 2014 promise that the hiring of a customer experience VP and other well-hyped improvements were going to "revolutionize" the way Comcast consumers were treated. Other companies like Charter (Spectrum) are in absolute free fall, dropping 8% year over year thanks to the poor service, rate hikes and empty promises in the wake of the company's bungled $89 billion acquisition of Time Warner Cable and Bright House Networks. And while things like gigabit broadband get a lot of media hype, we've noted that the lack of competition driving this problem is only getting worse. Numerous telcos have all but given up on residential broadband to shift their focus toward video advertising and enterprise services. And as they refuse to upgrade millions of DSL subscribers they don't actually want, cable companies like Comcast and Charter are securing a greater monopoly over broadband than ever before. Some like to claim new wireless technologies (like 5G) will emerge to magically provide competition to these providers. But while 5G wireless will provide faster, lower-latency and more resilient networks, it won't fix the business data service monopoly that drives high prices and many of the competition issues in the wireless sector. Nor will it address the industry's plan to keep putting ma bell back together via an endless array of competition-reducing megamergers. And however promising 5G is, it's not a substitute for uncapped, fixed broadband -- especially in more rural areas and less affluent cities. While cable secures a growing monopoly over fixed-line broadband, monopoly ISPs (with the Trump administration's help) are gutting all FTC, FCC and state oversight over their regional monopolistic fiefdoms. 
All while regulators like Ajit Pai whisper sweet nothings about how eliminating popular consumer protections like net neutrality will magically improve sector investment and competition. Surely this all works out well for the consumer, right? Permalink | Comments | Email This Story

Read More...
posted 14 days ago on techdirt
MuckRock is currently conducting a public records survey of prison telephone contracts. What it has secured so far will shock you, but only if you haven't been paying attention. There's nothing like a captive audience, and prisoners are the most captive of all. There's one way out via telephone and it's routed through mercenary companies and the law enforcement agencies that love them. Why so much law enforcement love for telcos specializing in prison phones? Because money buys a lot of love. A recently-released contract for prison phone services in Bartow County, Georgia shows that the County receives a commission of 77% from its current provider of inmate communications, ICSolutions. And it's not 77% of some small amount. In this agreement, phone calls are $0.16/minute and billing for calls involves fees of $3-6 for payment processing. The contract is so profitable for both ICS and the sheriff's department that ICS installs the system for free and provides the county with $225,000 in grants in exchange for an auto-renewing contract that helps lock out competitors. In addition, the county collects 50% of video visitation and "inmate tablet usage" fees. This may be at the low end of prison phone contracts, as far as commissions go. Other records obtained by MuckRock show government agencies angling for higher percentages and larger payouts. The Bristol County Sheriff's Office sent out a handful of proposals with demands for anywhere from 58-72% of call revenue. Depending on contractor, the department would make $2-4 per call, along with a cut of other communications services provided by contractors. The end result is more than $2 million a year flowing directly from prisoners (and their families) into county coffers. Unsurprisingly, this sheriff's department is being sued for its high-cost prison phone system. Also unsurprising is the fact those profiting from these agreements are reluctant to talk about them. Beryl Lipton reports one sheriff's department is seeking to withhold documents by deploying a dubious public records exemption. According to the Laramie County Sheriff’s Department in Wyoming, a request for its contract with inmate phone service provider Inmate Calling Solutions (ICSolutions) cannot be made public because the agreement itself is considered a “trade secret.” The letter from the county attorney's office claims the agreement between the sheriff and ICS prevents the documents from being released. Supposedly, the wording says the entire agreement is "confidential" or a "trade secret" (the attorney's letter doesn't specify which). Even if true, private companies can't do business with government entities and expect all of their documentation to remain out of the public eye. If the wording is similar in other ICS contracts, it hasn't stopped multiple government agencies from turning over copies of their contracts with the company to records requesters. This appears to be a case of someone at the county level finding a loophole to keep requesters from finding out just how much the local sheriff is making on prison phone calls. Permalink | Comments | Email This Story
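To make the Bartow County arithmetic concrete, here's a minimal sketch of what a single call costs a family versus what the county collects. The per-minute rate, fee range, and 77% commission are the figures reported above; the call length, the single deposit, and the assumption that the commission applies only to per-minute revenue (not the processing fees) are ours, not the contract's.

```python
# Toy illustration of the commission math reported above. Assumptions that are
# ours, not the contract's: the 77% commission applies to per-minute call revenue
# only, and the $3-6 payment-processing fee is charged once per deposit.

RATE_PER_MINUTE = 0.16   # reported per-minute rate in the Bartow County contract
COMMISSION = 0.77        # county's reported cut of call revenue
PROCESSING_FEE = 4.50    # midpoint of the reported $3-6 deposit fee

def call_economics(minutes: int, deposits: int = 1) -> dict:
    """What a family pays for one call and how the call revenue is split."""
    call_revenue = minutes * RATE_PER_MINUTE
    county_cut = call_revenue * COMMISSION
    return {
        "paid_by_family": round(call_revenue + deposits * PROCESSING_FEE, 2),
        "county_commission": round(county_cut, 2),
        "provider_share": round(call_revenue - county_cut, 2),
    }

# A 15-minute call funded by a single deposit:
print(call_economics(15))
# {'paid_by_family': 6.9, 'county_commission': 1.85, 'provider_share': 0.55}
```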

Read More...
posted 14 days ago on techdirt
This week, our top comment comes in response to Charter's claims that a lawsuit over its terrible broadband is just the result of an evil tech conspiracy. One anonymous commenter suggested that maybe they aren't so crazy: I pretty sure there is a Google/Netflix cabal that is against Charter communications. Unfortunately for them the cabal is their customers who would like use Google and Netflix. In second place, we have an anonymous suggestion for how to deal with the problem of invasive drug searches that go nowhere: This should have been very easy for the court to get right: Did the medical personnel enter into the record a warrant, secured by Customs and Border Patrol, directing them to perform these procedures? If yes, medical personnel are immune and the suit goes after CBP because they were "just following orders." If no, medical personnel are liable. Simple. Motivates medical personnel to demand a warrant before performing procedure; creates naturally public paper trail. For editor's choice on the insightful side, we start out with a response from Toom1275 to the WIPO blocking the Pirate Party while inviting a group whose website said it existed to battle space lizards: Well space lizards aren't that much more fictional than IP maximalism's ability to protect creativity. If you believe one is real, it isn't that much further of a leap to then accept the other. Next, we've got an anonymous comment that repurposes an anti-terrorist mantra in response to the government's prosecution of protesters: They hate us for our freedoms Over on the funny side, our first place winner is David with a response to comparisons between Europe and America: You cannot compare the Internet in Europe with the Internet in the U.S. Can you even imagine how many shootings there would be in Europe if they had Comcast? In second place, we've got a simple anonymous quip about how the lawyers in the Monkey Selfie case must have reacted to a judge's call for a do-over: I'll bet they went bananas For editor's choice on the funny side, we start out with a response from Ninja to the earlier comment about space lizards: To be fair space lizards do less harm to creativity than copyright maximalism. And finally, we've got another anonymous commenter pushing back against the idea of copyright that lasts "forever minus a day": "Whoa lets not be hasty there. Forever minus a second seems way more fair." -RIAA That's all for this week, folks! Permalink | Comments | Email This Story

Read More...
posted 15 days ago on techdirt
Five Years Ago This week in 2013, we took a look at a big intellectual property report that focused on fearmongering about Chinese IP theft (while asking the public to foot the bill), called for companies to be allowed to use malware against infringers, and proposed cutting off funding to the World Health Organization if it doesn't start prioritizing IP protection, for some reason. Meanwhile, Hollywood studios were trying to wipe Kim Dotcom's Mega off the web, the RIAA was denying that it stifles innovation (while facing opposition from the Internet Association over its attempts to wipe out DMCA safe harbors), and CBS was trying to deny that its direct threats to sue Aereo actually meant it would sue Aereo. Ten Years Ago This week in 2008, Viacom and YouTube were slugging it out in court while the former tried out some new anti-embedding arguments. The RIAA dropped its attack on the defunct Allofmp3, while ignoring the resurrection of the site under a different name, and ASCAP released a hugely problematic bill of supposed rights for artists. Metallica was trying to embrace the internet without offering any free downloads, and discovering that they had already squandered all their goodwill in that arena. And ACTA went from obscure trade agreement to a source of pushback and protests in record time. Fifteen Years Ago This week in 2003, eBay lost a patent lawsuit over the Buy It Now feature, leading to a scramble from other online retailers to buy up the patents in question. We saw early discussion of tech ideas like personal 3D printers and telepresence robots (oh, and anti-infringement watermarks on content). Microsoft settled its dispute with AOL with a $750-million payout. And a court solidified many of the problems with the DMCA by ruling that rightsholders don't have to investigate the sites that they target. Permalink | Comments | Email This Story

Read More...
posted 16 days ago on techdirt
As in any country, the limits of free speech are determined by the ruling party. While we have a Constitution that (mostly) holds our representatives at bay, many countries only pay lip service to rights they have previously declared inviolable. Egypt's government has long suppressed dissent and strangled communications. It deployed an internet kill switch in 2011, cutting off access to millions of Egyptians. A regime change followed and the former president was fined for nuking the country's internet access. Despite this power shift, nothing much changed. The current government cares no more for dissent and criticism than the previous one. Egyptian journalist Wael Abbas, who exposed police brutality and government torture, has provided his fellow residents an invaluable service: an unfiltered, ground-level view of government atrocities. His work even resulted in the rare conviction of Cairo police officers. But he's fought censorship at home -- as well as abroad -- every step of the way. YouTube, Facebook, and Twitter have all suspended his accounts, supposedly for policy violations. Most of these were reversed after US activists intervened on his behalf, but his accounts are always just another perceived violation away from being shut down permanently. And that's just on the US side. Egypt's government has tried to silence him on the homefront, convicting him in 2010 for "providing telecommunications service to the public without permission of the authorities." That was under the previous regime -- the one that deployed an internet kill switch to disrupt the communications of its many critics and opponents. The new regime, as noted above, is no better. As Jillian York reports for the EFF, Abbas has been detained by Egyptian police, apparently for the crime of exposing government misdeeds. Abbas was taken at dawn on May 23 by police to an undisclosed location, according to news reports which quote his lawyer, Gamal Eid. The Arabic Network for Human Rights Information (ANHRI) reported that Abbas was not shown a warrant or given a reason for his arrest. He appeared in front of state security yesterday and was questioned and ordered by prosecutors to be held for fifteen days. According to the Association for Freedom of Thought and Expression (AFTE), Abbas was charged with “involvement in a terrorist group”, “spreading false news” and “misuse of social networks.” The details of the charges really don't matter. Much like "resisting arrest," the charges are catch-all crimes meant to show the charged the importance of kowtowing to public displays of power. Unfortunately, the prosecution -- if it even needs the help -- will be using actions taken by US social media companies as evidence against Abbas. It seems clear that the messaging around Abbas' detention is that his arrest was connected to his posts on Facebook and Twitter, and that the prosecution and media are using his suspension by these services as part of the evidence for his guilt. This is more than merely unfortunate. US social media platforms have played a part in anti-government uprisings around the world. In some cases, platforms have exercised caution when dealing with accounts caught in the middle of government violence, taking extra steps to protect the humans behind pseudonymous accounts. But Abbas has received none of these protections and his documentation of government brutality has resulted in multiple suspensions. 
The self-proclaimed guardians of worldwide free speech are providing evidence to government censors with their sometimes careless moderation efforts. When you treat certain content as offensive and apply blanket moderation policies to it, you strip the "offensive" content of its context. In cases like this, blanket moderation could mean the difference between freedom and a lengthy prison sentence. If social media platforms want to continue to operate in countries where governments are openly oppressive, they need to do a much better job protecting those who expose government abuse. Permalink | Comments | Email This Story

Read More...
posted 16 days ago on techdirt
The last time we discussed Illinois' Biometric Information Privacy Act, a 2008 law that gives citizens in the state rights governing how companies collect and protect their biometric data, it was when a brother/sister pair attempted to use the law to pull cash from Take-Two Interactive over its face-scanning app for the NBA2K series. In that case, the court ruled that the two could not claim to have suffered any actual harm as a result of using their avatars, with their real faces attached, in the game's online play. One of the chief aspects of the BIPA law is that users of a service must not find their biometric data being used in a way that they had not intended. In that case, online play with these avatars was indeed the stated purpose of uploading their faces to begin with. But now the law has found itself in the news again, with a federal court ruling that millions of Facebook users can proceed under a class action with claims that Facebook's face-tagging database violates BIPA. Perhaps importantly, Facebook's recent and very public privacy issues may make a difference compared with the Take-Two case. A federal judge ruled Monday that millions of the social network’s users can proceed as a group with claims that its photo-scanning technology violated an Illinois law by gathering and storing biometric data without their consent. Damages could be steep — a fact that wasn’t lost on the judge, who was unsympathetic to Facebook’s arguments for limiting its legal exposure. Facebook has for years encouraged users to tag people in photographs they upload in their personal posts and the social network stores the collected information. The company has used a program it calls DeepFace to match other photos of a person. Alphabet’s cloud-based Google Photos service uses similar technology and Google faces a lawsuit in Chicago like the one against Facebook in San Francisco federal court. Both companies have argued that none of this violates BIPA, even when this database of face data is generated without users' permission. That seems to contradict BIPA, under which fines between $1,000 and $5,000 can be assessed for every use of a person's image without their permission. Again, recent news may come into play here, as noted by the lawyer for the Facebook users in this case. “As more people become aware of the scope of Facebook’s data collection and as consequences begin to attach to that data collection, whether economic or regulatory, Facebook will have to take a long look at its privacy practices and make changes consistent with user expectations and regulatory requirements,” he said. Now, Facebook has argued in court against this moving forward as a class by pointing out that different users could make different claims of harm, impacting both the fines and outcomes of their claims. While there is some merit to that, the court looked at those arguments almost purely as a way for Facebook to try to get away from the enormous damages that could potentially be levied under a class action suit, and rejected them. As in the Take-Two case, Facebook is doing everything it can to raise the bar for any judgment on the actual harm suffered by these users -- harm the company claims doesn't exist. The Illinois residents who sued argued the 2008 law gives them a “property interest” in the algorithms that constitute their digital identities. The judge has agreed that gives them grounds to accuse Facebook of real harm. 
Judge Donato has ruled that the Illinois law is clear: Facebook has collected a “wealth of data on its users, including self-reported residency and IP addresses.” Facebook has acknowledged that it can identify which users who live in Illinois have face templates, he wrote. We've had our problems with class action suits in the past, but it shouldn't be pushed aside that this case could result in huge damages being assessed against Facebook. It's also another reminder that federal privacy laws are in sore need of modernization, if for no other reason than to harmonize how companies can treat users throughout the United States.

posted 16 days ago on techdirt
A recent Fourth Circuit Appeals Court decision found government agents at US borders need something more than the nothing currently required to perform searches of electronic devices. Cursory searches without suspicion are still fine in the Constitution-free zone, but forensic searches of cellphones need, at minimum, reasonable suspicion. This decision aligned the Fourth with the Ninth Circuit, where it was also determined forensic device searches require some sort of suspicion, even if performed at the border. A case out of Massachusetts (First Circuit) challenging a suspicionless device search has been allowed to move forward, possibly bringing another circuit into the mix and deepening the split. The Eleventh Circuit Appeals Court, however, has sided with the government and against citizens' privacy. It has upheld the lower court's determination that border device searches require no reasonable suspicion, no matter what the Supreme Court said in its Riley decision, which created a warrant requirement for phone searches. (via Jake Laperruque, Brad Heath) Karl Touset had his devices searched at the Atlanta airport after returning from an overseas trip. This followed some investigatory work by the government that suggested Touset might be involved in child pornography. The detainment and search were also prompted by money transfer service Xoom, which reported several people for making "frequent low money transfers" to people in "source countries" for child porn. Touset was met by CBP agents on arrival. Manual searches of his two phones revealed nothing, but CBP seized Touset's laptops and external hard drives. Those were forensically searched and child porn was discovered. Touset challenged these warrantless searches, but the Eleventh Circuit [PDF] immediately shuts down his challenge by citing the Supreme Court. The Supreme Court has never required reasonable suspicion for a search of property at the border, however non-routine and intrusive, and neither have we. Arguing that devices hold thousands of pieces of personal info doesn't help. Nor has it “been willing to distinguish . . . between different types of property.” Neither does pointing out the invasiveness of a forensic search, which can recover long-deleted files or other electronic detritus. And it rejected a judicial attempt to distinguish between “routine” and “nonroutine” searches and to craft “[c]omplex balancing tests to determine what [constitutes] a ‘routine’ search of a vehicle, as opposed to a more ‘intrusive’ search of a person.” We have been similarly unwilling to distinguish between different kinds of property. Going from there, the Appeals Court says the Fourth Amendment doesn't apply at the border -- no matter what the Supreme Court justices may have said about the ubiquity of devices capable of storing people's "entire lives." We see no reason why the Fourth Amendment would require suspicion for a forensic search of an electronic device when it imposes no such requirement for a search of other personal property. Just as the United States is entitled to search a fuel tank for drugs, see Flores-Montano, 541 U.S. at 155, it is entitled to search a flash drive for child pornography. And it does not make sense to say that electronic devices should receive special treatment because so many people now own them or because they can store vast quantities of records or effects.
The Appeals Court acknowledges its split with the Fourth and Ninth Circuits before pointing to its own precedent as the correct conclusion. We are unpersuaded. Although the Supreme Court stressed in Riley that the search of a cell phone risks a significant intrusion on privacy, our decision in Vergara made clear that Riley, which involved the search-incident-to-arrest exception, does not apply to searches at the border. 884 F.3d at 1312 (“[T]he Supreme Court expressly limited its holding to the search-incident-to-arrest exception.”). And our precedent considers only the “personal indignity” of a search, not its extensiveness. Vega-Barvo, 729 F.2d at 1346. Again, we fail to see how the personal nature of data stored on electronic devices could trigger this kind of indignity when our precedent establishes that a suspicionless search of a home at the border does not. And it appears the Eleventh Circuit has reached this conclusion simply because it has strong feelings about the contraband discovered. Indeed, if we were to require reasonable suspicion for searches of electronic devices, we would create special protection for the property most often used to store and disseminate child pornography. This ignores the fact that electronic devices are most often used to store and disseminate almost everything -- most of it legal. This is the court refusing to even slightly raise the bar for invasive forensic searches just because it doesn't like this particular appellant. This decision allows the government to root around in everyone's personal papers without a warrant just because some people may carry illicit goods across the border. That's not a rational reason for refusing to even consider raising the bar to reasonable suspicion (which the agents had in this case). This feels more like an emotional decision than a neutral application of the law, and it does nothing to protect millions of innocent travelers from their government.

posted 16 days ago on techdirt
The TSA is the worst. Super-secret watchlists can keep people from flying -- people deemed too dangerous to travel but not dangerous enough to arrest. This isn't the TSA's fault. Not these lists. Those are maintained by agencies that could possibly cobble together enough intel to build a flimsy case against these "dangerous" would-be travelers. The TSA, however, maintains its own database of travelers. It can't necessarily keep them from boarding airplanes, but it can give agents a heads-up that the person in the queue probably needs to be detained and hassled. [via Boing Boing] The Transportation Security Administration has created a new secret watch list to monitor people who may be targeted as potential threats at airport checkpoints simply because they have swatted away security screeners’ hands or otherwise appeared unruly. A five-page directive obtained by The New York Times said actions that pose physical danger to security screeners — or other contact that the agency described as “offensive and without legal justification” — could land travelers on the watch list, which was created in February and is also known as a “95 list.” It's an agency shitlist, and only the TSA knows who's on it. This list isn't limited to people who've actually assaulted agents; it also covers people who've expressed their displeasure with intrusive gropings through words or non-violent deeds. The agency's official statements make it clear this is an arbitrary way to punish travelers who make agents unhappy, noting that the list neither requires "injury" to a TSA employee nor any intent to cause one. Instead, the list contains anyone who presents a "challenge" to the "safe and effective completion of screening." That's about the end of the TSA's honesty on the matter, however. So far, the names of fewer than 50 people have been put on the watch list, said Kelly Wheaton, a T.S.A. deputy chief counsel. But two other government security officials who are familiar with the new watch list, describing it on the condition of anonymity because they were not authorized to discuss it, said that the number of names on the list could be higher, with travelers added daily. Without evidence, the TSA claims a whole 34 of its screeners were "assaulted" last year. Keep in mind this number pales in comparison to the millions of travelers screened every year. The fact that this happened eight more times last year than it did the year before (26 in 2016) does not demonstrate the need for a special list of argumentative travelers. Also keep in mind the TSA's definition of "assault" -- much like law enforcement's -- covers actions or words that cause no "injury" and may have been committed with zero intent to cause harm. On top of the seemingly punitive motivations for creating the "95 list," there's the fact that once you're on this list -- like other government lists targeting travelers -- you may never come off. The directive obtained by The Times does not specify how members of the public can appeal being included on the list. Just like all the other travel-related watchlists, then. Great. So, the TSA can freely antagonize travelers and slap them on a watchlist if they respond antagonistically. I guess we can mark this down as a win for terrorists because it sure doesn't feel like a win for Americans.
