posted 16 days ago on techdirt
A question that is almost always ignored when crafting legislation is "How will this new law be abused?" In the case of Spain's horrific Gag Law (officially [and hilariously] known as the "Citizen Security Law"), the answer is, "As much as possible." Just a couple of weeks after a Spanish citizen was fined for calling his local police force "slackers," a Spanish woman has been fined for posting a picture of a police car parked in a handicapped spot to her Facebook page.

A Spanish woman has been fined €800 (£570) under the country's controversial new gagging law for posting a photograph of a police car parked illegally in a disabled bay. The unnamed woman, a resident of Petrer in Alicante, south-east Spain, posted the photo on her Facebook page with the comment "Park where you bloody well please and you won't even be fined". The police tracked her down within 48 hours and fined her.

If nothing else, the new law has reset law enforcement priorities. If law enforcement is insulted, the perpetrator needs to be tracked down before the trail goes cold. According to the original report at Petreraldia.com, differing narratives have emerged. One version of the incident says the officer who parked in the handicapped spot approached the photographer and explained the situation, apparently hoping to prevent a disparaging upload. If so, it didn't take. Another version says the uploader called to apologize to the police, presumably to ward off a citation. If so, that didn't take. And yet another version says there was no interaction between police and the photographer until they showed up at her home to hand her a ticket.

What really happened isn't important, because there's the Official Police Narrative. The spokesman for the police informed Petreraldia that "in an emergency" police are allowed to park wherever they want, so as to expedite the apprehension of suspects. The "emergency" behind this illegal parking job? An "incident of vandalism in a nearby park." And, of course, the only other official remnant of this one-two punch of exemplary police work is the €800 ticket. It seems the police -- if they felt so demeaned by the Facebook post (which was swiftly removed by the original poster) -- could have asked for an apology, rather than €800. Or the department could have offered its explanation of the situation (as it did!), rather than fine the citizen. But the law is the law, and as such, must be abused to the fullest extent allowable. Permalink | Comments | Email This Story

Read More...
posted 16 days ago on techdirt
Last month, you may recall, the news broke that the "dating site for people who want to cheat on their spouse," Ashley Madison, had its systems hacked, and all its data leaked. For rather obvious reasons, this had a lot of people rather worried about what would be revealed. However, the company insisted that there was no problem at all, because it had used the DMCA to take down all leaked copies. We pointed out how ridiculous this was on multiple levels. First, that's not what the DMCA is for, and as embarrassing as this was for Ashley Madison's parent company Avid Life Media, it does not hold the copyright in such data. Second, the idea that this would actually stop the data from reaching the public was ludicrous. And... it took longer than expected, but less than a month later, the data file has leaked online, and you can bet that lots of people -- journalists, security researchers, blackmailers and just generally curious folks -- have been downloading it and checking it out. Maybe, next time, rather than claiming copyright, the company will do a better job of protecting its systems.Permalink | Comments | Email This Story

Read More...
posted 16 days ago on techdirt
And away we go. Techdirt (myself specifically) has been talking for some time about the impending expansion of major sports streaming options as the cord-cutting trend has continued. It only makes sense: leagues and marketers will go where the audience is. The most recent trend started slowly with the FCC voting to end its blackout rule. That decision was important for streaming, because one of the dumbest ideas that migrated over from broadcast and cable television was the idea that local blackouts of broadcasts and streams were in any way a good idea. Even as the NFL, NBA, NHL and MLB all have incrementally increased streaming options, those efforts have continued to be hampered by local blackout restrictions. Well, Major League Baseball just took a giant step over the blackout line and is now effectively straddling it, announcing that local streaming will be available in fifteen markets in the 2016 season. There is no specific timetable for a potential announcement of a deal between FOX and MLB. The two sides hope to complete the agreement around the end of this season, which would give the league and RSNs a full offseason to market the availability of the new local streams before Opening Day 2016. MLB Commissioner Rob Manfred, working with the league's president of business and media, Bob Bowman, has made in-market baseball streaming a key league priority, including personally participating in several negotiating sessions. Per the above, this specific deal is going to be done with MLB teams that have broadcasting deals with Fox. But don't think for a single moment that that's where it ends. Even if MLB can't get similar deals in place for the other half of teams in the league, which would fully free up the fantastic MLB.TV product for local streaming, any modicum of success that Fox has with this program will be immediately adopted by the other broadcasters. They really don't have a choice. Cord-cutting isn't going away and it's been professional and college sports that have long kept subscribers tethered. The trickle of streaming options in sports has been turning into more of a deluge, and the cable industry should be expecting some tough times ahead in the next, oh, say three to five years. Because if Manfred has this on his priority list for MLB, please believe that the commissioners in the other leagues have it on theirs as well. And when sports streaming really gets going, it's the end of cable as we currently know it. Permalink | Comments | Email This Story

Read More...
posted 16 days ago on techdirt
It used to be only the people wearing tinfoil hats that were worried about satellites flying above us all the time. However, satellite technology is getting cheaper and easier to access, and more satellites are looking down at us than are looking at the stars. No one should be worried about a bunch of Helicarriers targeting everyone just yet, but we're making progress towards a sky filled with some pretty advanced technology. China is building yet another GPS network called Beidou. The US has GPS. Russia has GLONASS. The EU has Galileo. Do we really need another one? [url] Nanosatellites could provide an always-on connection for ground sensors, making disaster relief efforts more efficient and providing tons of useful data. Terran Orbital wants tiny satellites orbiting at 600 km above us to provide reliable (not necessarily fast) wireless connections to all kinds of devices. [url] Satellite imagery can be used for a bunch of business intelligence services. Retail parking lots can be monitored during prime purchasing seasons. Mining operations and construction projects could be tracked to ensure foreign companies are making the progress they say they're making. (And drones will be spying on everyone!) [url] After you've finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.Permalink | Comments | Email This Story

Read More...
posted 16 days ago on techdirt
If you still watch traditional TV chances are you've increasingly been accosted with blacked out content and annoying ticker warnings as cable operators and broadcasters bicker over programming contracts. Whether it's Fox News's ugly fight with Dish, DirecTV's feud with The Weather Channel, or the Cablevision - News Corporation fight that blacked out the World Series a few years back, these obnoxious disputes have only gotten uglier over the last few years as programming costs have soared and the cable and broadcast industry works tirelessly to ensure its looming irrelevance. For the consumer, these fights usually go something like this: you're bombarded with on-screen tickers and ads from both your cable operator and the broadcaster telling you the other guy is being a greedy villain during a contract standoff. After the programming contract expires, content you're paying for gets blacked out (which you're of course never given a refund for) by one side or the other in the hopes of pushing negotiations along. After a month or two the two sides then ultimately strike a confidential new programming deal. A few weeks later your cable bill sees a price hike -- potentially your second of the year. It's kind of a lose-lose scenario for consumers, who get used as public relations pinatas (call your cable operator to complain!), lose access to content they're paying for, and then get accosted with an endless series of rate hikes. For the last few years, the FCC has generally had a hands off approach to these disputes (boys will be boys, and all that), but as they've gotten uglier and consumers have increasingly been railroaded, pressure has mounted for the regulator to at least do something. According to a new blog post by FCC boss Tom Wheeler, the FCC head says he's looking at a number of ideas that could help ease the pain of these idiotic standoffs. Maybe. One, the FCC is considering lifting rules that prohibit cable companies from simply piping in another region's local broadcast affiliate, allowing them to at least provide customers with some version of ABC, NBC, Fox or CBS while negotiations continue. The agency also suggests it's going to look more closely at the very definition of "good faith negotiations," since these blackouts make it clear there's not much of that actually going on: "The NPRM currently before the Commission undertakes a robust examination of practices used by parties in retransmission consent negotiations, as required by Congress. The goal of the proposed rulemaking is to ensure that these negotiations are conducted fairly and in a way that protects consumers." Since these are private business contracts, the FCC injecting itself into these negotiations is going to piss off free marketeers and the cable and broadcast industry to no end, but the industry brought it upon itself by behaving like absolute jackasses for the last few years. Not only have they consistently held traditional TV customers hostage, some broadcasters have even blocked access to online content in petulant responses to contract feuds. In its fight with Cablevision in 2010, News Corporation went so far as to get Hulu to block Cablevision broadband customers from accessing all Fox content. Viacom did something similar in 2014 when it blocked all CableONE broadband customers from accessing Viacom content online, even if those broadband users were paying for TV from another provider. 
Let that sink in a little bit: you pay for Viacom content through, say, DirecTV, but you can't access that content through your broadband provider because the cable arm of your ISP is engaged in a TV content contract dispute. And while broadcasters do deserve the lion's share of the blame for soaring programming rates, the cable providers aren't faultless since they're quick to impose rate hikes of their own (modem fees, broadcast TV fees, set top rental charges, charges to pay over the phone) as often as possible. Layer this lost content and annoyance on to existing high prices and the industry's absolutely legendary reputation for atrocious customer service, and you've uncovered the industry's ingenious plan to more efficiently dig its own grave on the eve of the cord cutting revolution.Permalink | Comments | Email This Story

Read More...
posted 16 days ago on techdirt
Copyright expert and professor Pam Samuelson, one of the most respected scholars of copyright law, has published a short paper explaining what she calls the "three fundamental flaws in CAFC's Oracle v. Google decision." As you may recall, that ruling was a complete disaster, overturning a lower court decision that noted that application programming interfaces (APIs) are not copyrightable, because Section 102 of the Copyright Act pretty clearly says that: In no case does copyright protection for an original work of authorship extend to any idea, procedure, process, system, method of operation, concept, principle, or discovery, regardless of the form in which it is described, explained, illustrated, or embodied in such work. But CAFC got super confused, and basically ignored 102 while misunderstanding what an API actually is. After the White House itself got confused, the Supreme Court refused to hear the case. This means that the CAFC ruling stays in place, despite it being at odds with lots of other courts. And this might not be a huge problem, since most copyright cases won't go to CAFC. The only reason the Oracle case went to CAFC was because it started out as a patent case, and CAFC gets all patent appeals, even if the appeal has nothing to do with patents. Except... of course, now there's incentive to toss in a bogus patent complaint along with a questionable "interface copyright" complaint just to get it into CAFC's jurisdiction. Samuelson's paper is a good read (and we'll get to it), but I'd actually argue it's a bit too tame, and leaves out the really fundamental flaw in the CAFC ruling and in the White House brief: these non-programmers don't realize that an API is not software. Almost all of the mistakes stem from this simple fact. They assume that an API is software. And this is highlighted very clearly in the CAFC ruling where they quote Pam Samuelson out of context and then completely miss what she's actually saying. Here's from that ruling: Google argues that “[a]fter Sega, developers could no longer hope to protect [software] interfaces by copyright . . . Sega signaled that the only reliable means for protecting the functional requirements for achieving interoperability was by patenting them.” ... (quoting Pamela Samuelson, Are Patents on Interfaces Impeding Interoperability...). And, Google relies heavily on articles written by Professor Pamela Samuelson, who has argued that “it would be best for a commission of computer program experts to draft a new form of intellectual property law for machine-readable programs.” Pamela Samuelson, CONTU Revisited: The Case Against Copyright Protection for Computer Programs in Machine-Readable Form.... Professor Samuelson has more recently argued that “Altai and Sega contributed to the eventual shift away from claims of copyright in program interfaces and toward reliance on patent protection. Patent protection also became more plausible and attractive as the courts became more receptive to software patents.”... Although Google, and the authority on which it relies, seem to suggest that software is or should be entitled to protection only under patent law—not copyright law— several commentators have recently argued the exact opposite. See Technology Quarterly, Stalking Trolls, ECONOMIST, Mar. 8, 2014, http://www.economist. 
com/news/technology-quarterly/21598321-intellectualproperty- after-being-blamed-stymying-innovationamerica- vague (“[M]any innovators have argued that the electronics and software industries would flourish if companies trying to bring new technology (software innovations included) to market did not have to worry about being sued for infringing thousands of absurd patents at every turn. A perfectly adequate means of protecting and rewarding software developers for their ingenuity has existed for over 300 years. It is called copyright.”); Timothy B. Lee, Will the Supreme Court save us from software patents?, WASH. POST, Feb. 26, 2014, 1:13 PM, http://www.washingtonpost.com/blogs/the-switch/wp/ 2014/02/26/will-the-supreme-court-save-us-from-softwarepatents/ (“If you write a book or a song, you can get copyright protection for it. If you invent a new pill or a better mousetrap, you can get a patent on it. But for the last two decades, software has had the distinction of being potentially eligible for both copyright and patent protection. Critics say that’s a mistake. They argue that the complex and expensive patent system is a terrible fit for the fast-moving software industry. And they argue that patent protection is unnecessary because software innovators already have copyright protection available.”). But this is just wrong. If you actually look at Samuelson's quotes, she's talking about interfaces not software. Notice in every quote she is not actually talking about the software itself, but "interfaces," "functional requirements" and "program interfaces." The absolute worst is the first quote, where Samuelson writes "interfaces" and CAFC inserts a "[software]" to imply that it's the same thing. It's not. The two paragraphs are not actually at odds. It is entirely reasonable to argue that interfaces shouldn't be protected by copyright (thanks to Section 102) and that software should not be patentable. It only looks like they're disagreeing if you're confused and you think that an API is the same thing as the software itself. But that's like saying a recipe is the same as a meal or that a dictionary is the same as a novel that uses those words. It's not the same thing. So while Samuelson's new paper is great, I still feel like she holds back on that key issue, which is so just blatantly wrong, and seems to underline why non-technical people (including the judges in this case) got so confused. Of course software is copyrightable. The argument is over whether or not an API necessary for interoperability is copyrightable. And, as Samuelson's paper notes, it had been widely accepted prior to the CAFC ruling that the answer is no because they're "procedures, processes, systems and methods" under Section 102. A second flaw was the CAFC’s overbroad view of the extent to which the “structure, sequence and organization” (SSO) of computer programs are protectable by copyright law. During the 1980s, some courts regarded program SSO as having a broad scope of protection under copyright law. But in the last two and a half decades, courts and commentators have recognized that the SSO concept is too imprecise and misleading to be useful in software copyright cases. The SSO concept does not help courts make appropriate distinctions between protectable and unprotectable structural elements of programs. Procedures, processes, systems, and methods of operation, almost by definition, contribute to the SSO of programs that embody them. However, this does not make those elements protectable by copyright. 
The design of many program structures, including APIs, is inherently functional and aimed at achieving technical goals of efficiency. This disqualifies them as protectable expression under U.S. law. Anyway, the rest of the paper is a good read, and hopefully it means that eventually this issue will get back to the Supreme Court -- and one hopes, at that time, someone can at least get through to them that an API is not software.Permalink | Comments | Email This Story
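To make the recipe-versus-meal distinction above concrete, here is a minimal sketch in Python rather than Java (the language actually at issue in Oracle v. Google); the class and method names are hypothetical and are not Oracle's or Google's actual code. The declarations that other programs depend on play the role of the API, while the bodies underneath are the separately written software that implements it.

```python
# The "API": the names, parameters and organization that other code relies on
# for interoperability. (Hypothetical example, loosely modeled on a list type.)
class List:
    def add(self, item):
        """Append item to the end of the list."""
        raise NotImplementedError

    def get(self, index):
        """Return the item stored at position index."""
        raise NotImplementedError


# One implementation of that API. A clean-room reimplementation keeps the
# declarations above identical -- so existing programs still work -- while
# writing entirely new code for the bodies below.
class ArrayList(List):
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def get(self, index):
        return self._items[index]


words = ArrayList()
words.add("hello")
print(words.get(0))  # -> hello
```

Two independent implementations can share the first block verbatim while sharing nothing in the second, which is the sense in which the post argues that an interface needed for interoperability is a "method of operation" rather than the copyrightable program that implements it.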

Read More...
posted 16 days ago on techdirt
We've already written a few articles about the confirmation that AT&T is going above and beyond what's required by the law to be a "valued partner" of the NSA in helping with its surveillance campaign. While it's long been known that AT&T was giving fairly direct access to its backbone (thank you Mark Klein!), the latest released documents provide much more detail -- including that AT&T often does the initial "sifting" before forwarding content it finds to the NSA. To some NSA apologists, this is proof that the NSA isn't so bad, because it doesn't have full unencumbered access to everything, but rather is relying on AT&T to do the searching and then handing over what it finds. Of course, as the documents showed, it's only in some cases that AT&T searches first, in others it appears that the NSA does, in fact, have full access. But, still, as Cindy Cohn at the EFF is noting, if the NSA thinks that having AT&T sift first and then voluntarily hand stuff over somehow absolves it of violating the 4th Amendment with these collections, well, then the NSA is wrong. First some law: the Fourth Amendment applies whenever a "private party acts as an ‘instrument or agent’ of the government." This rule is clear. In the Ninth Circuit, where our Jewel v. NSA case against mass spying is pending, it has been held to apply when an employee opens someone's package being shipped in order to obtain a DEA reward (US v. Walther), when a hotel employee conducts a search while the police watch (US v. Reed), and when an airline conducts a search under a program designed by the FAA (United States v. Davis), among others. The concept behind this rule is straightforward: the government cannot simply outsource its seizures and searches to a private party and thereby avoid protecting our constitutional rights.  It seems that the NSA may have been trying to do just that. But it won't work. Given that the EFF is already challenging this collection in the Jewel v. NSA case, it seems like the latest leak may be somewhat helpful.Permalink | Comments | Email This Story

Read More...
posted 17 days ago on techdirt
The US government's desire to keep terrorists off airplanes has resulted in heavily-populated "watchlists" -- lists short on due process and long on hunches. The TSA, in particular, has embraced a mixture of borrowed ideas and junk science to staff airports with Behavioral Detection Officers, who attempt to keep terrorists from boarding planes by looking for any number of vague indicators. The end result is a billion-dollar program with the accuracy of a coin flip. That's the physical version of the government's "predictive policing" security efforts. The same sort of vague quasi-science is used to populate its "no fly" list. The U.S. government’s reliance on “predictive judgments” to deprive Americans of their constitutionally protected liberties is no fiction. It’s now central to the government’s defense of its no-fly list—a secretive watch list that bans people from flying to or from the United States or over American airspace—in a challenge brought by the American Civil Liberties Union. Court filings show that the government is trying to predict whether people who have never been charged, let alone convicted, of any violent crime might nevertheless commit a violent terrorist act. Because the government predicts that our clients—all innocent U.S. citizens—might engage in violence at some unknown point in the future, it has grounded them indefinitely. The court itself found the "no fly" list's redress processes (or lack thereof) to be unconstitutional, as it was basically unchallengeable by listed travelers. (And the only sure way for a person to discover they were on the "no fly" list was to buy a ticket to somewhere and attempt to board a plane.) According to the court, the list and its lack of proper redress was a perfect storm of civil liberties violations. In summary, on this record the Court concludes the DHS TRIP process presently carries with it a high risk of erroneous deprivation in light of the low evidentiary standard required for placement on the No-Fly List together with the lack of a meaningful opportunity for individuals on the No-Fly List to provide exculpatory evidence in an effort to be taken off of the List. While the DHS has slightly improved its redress process for those who find themselves on the list, it hasn't made any changes to its dubious selection standards. In fact, it's made this part of the process worse. [T]he U.S. government launched its predictive judgment model without offering any evidence whatsoever about its accuracy, any scientific basis or methodology that might justify it, or the extent to which it results in errors. In our case, we turned to two independent experts to evaluate the government’s predictive method: Marc Sageman, a former longtime intelligence community professional and forensic psychologist with expertise in terrorism research, and James Austin, an expert in risk assessment in the criminal justice system. Neither found any indication that the government’s predictive model even tries to use basic scientific methods to make and test its predictions. As Sageman says, despite years of research, no one inside or outside the government has devised a model that can predict with any reliability if a person will commit an act of terrorism. And there go any redress improvements. To get off the list, a person must convince the government he or she won't commit a criminal act that its "predictive model" has determined they might. At least in the FBI's terrorist "investigations," there's a minimum of intent and activity. 
In the case of would-be travelers like the ACLU's clients, there's nothing more than a predictive model spitting out probabilities. In all likelihood, the predictive model used by the US government is based on more than faulty science. It's also based on faulty reasoning. As Doug Saunders at the Globe and Mail points out, analysts researching tracking and surveillance of would-be terrorists are finding the usual presumptions are mostly wrong. Analysts began looking at the work of Paul Gill, a criminologist at the University College of London. In a highly influential 2014 paper titled “Bombing Alone: Tracing the Motivations and Antecedent Behaviours of Lone-Actor Terrorists,” Dr. Gill and his colleagues analyzed known terrorists not by what they thought or where they came from, but by what they did. In the weeks before an attack, terrorists tend to change address (one in five) or adopt a new religion (40 per cent of Islamic terrorists and many right-wing terrorists did so). And they start talking about violence: 82 per cent told others about their grievance; almost seven in 10 told friends or family that they “intended to hurt others.” A huge proportion had recently become unemployed, experienced a heightened level of stress or had family breakdowns. And most had done things that looked like planning – including contacting known violent groups. Predictive modeling often looks like "thought policing," but that is of little use, apparently. There are other indicators that are far more telling, but these factors aren't being given proper weight by the models in use. Saunders notes that there has been a shift in modeling over the past couple of years, thanks to research from analysts like Paul Gill, but that shift in predictive factors hasn't slowed legislators from demanding even more futile, invasive "thought policing." Unfortunately, governments, including Canada’s, are behind the curve: Just as their terrorism experts and security employees have abandoned policies which resemble the policing of thoughts, they're passing disturbing laws to make such obsolete practices easier. The DHS may be using smarter modeling now, but it's not as though it's been examining its existing list to remove those who don't match the vague criteria. Instead, it's only responding to challenges made by travelers who find themselves on the list, and even then, it may still withhold information a listed flyer could use to challenge their status if the agency deems the release of such info a threat to national security. The nod to due process leads to an exchange of information and paperwork with the government, but there's nothing adversarial about the redress procedure. It's your word against theirs, with the agency -- not a court -- making the final determination as to whether a person can ever board an airplane. Permalink | Comments | Email This Story

Read More...
posted 17 days ago on techdirt
Grab the Train Simple WordPress 4.0 Fundamentals Course for $25 and start mastering the basics behind one of the most popular open-source platforms. Learn how to set up self-hosted sites and blogs and how to customize them with plug-ins and themes. You'll also learn how to manage and update your site and how to utilize WordPress' many features to create a site you've always wanted. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.Permalink | Comments | Email This Story

Read More...
posted 17 days ago on techdirt
Back in 2011, This American Life toured an office building in Marshall, Texas, and found eerie hallways of empty offices that serve as the 'headquarters' of patent trolls. For many, that was the first introduction to the strange world of the Eastern District of Texas, its outsized role in patent litigation and, especially, its effective support of the patent troll business model. Trolls love the Eastern District for its plaintiff-friendly rules, so they set up paper corporations in the district as an excuse to file suit there. Meanwhile, defendants find themselves dragged to a distant, inconvenient, and expensive forum that often has little or no connection to the dispute. The remote district's role has only increased since 2011. The latest data reveals that the Eastern District of Texas is headed to a record year. An astonishing 1,387 patent cases were filed there in the first half of 2015. This was 44.4% of all patent cases nationwide. And almost all of this growth is fueled by patent trolls.

Happily, lawmakers have finally moved to restore some balance. The latest version of the Innovation Act in the House includes language that would make it much harder for trolls to file in the Eastern District of Texas. The proposal goes under the decidedly mundane name of "venue reform" but it could actually be crucial to the effort to fix our broken patent system.

The Luckiest Court in the Universe

The Eastern District of Texas is a federal court district running along the Texas-Louisiana border. The district covers a largely rural area without much of a technology industry. It is just one of 94 federal district courts. (Some states, like Vermont, have a single federal district, while others, like Texas and California, have as many as four.) If patent cases were distributed evenly among the federal district courts, each one would have received about 33 cases so far this year – a far cry from the 1,387 filings in the Eastern District of Texas. Accident? We don't think so. In fact, we ran a calculation to see how likely it is that at least 1,387 of 3,122 patent cases might end up there by chance. The resulting probability is so vanishingly small that you'd be more likely to win the Powerball jackpot 200 times in a row (a rough sketch of this calculation appears at the end of this post). Obviously, something other than chance is attracting trolls to this remote district.

Now that folks are taking notice, some Eastern District of Texas jurists are feeling a bit defensive. Former Judge Leonard Davis, for example, recently said: "To say the Eastern District is responsible [for the patent troll problem] is to say that the Southern District of Texas is responsible for immigration problems." This is nonsense. The Southern District of Texas gets immigration cases because it sits on the U.S.-Mexico border. There is no equivalent reason for the Eastern District of Texas to be a hotbed of patent litigation. To understand why the district sees so much patent trolling, we need to look deeper.

How We Got Here

The Eastern District of Texas was not always so popular. In 1999, only fourteen patent cases were filed there. By 2003, the number of filings had grown to fifty-five. Ten years later, in 2013, it was 1,495. This massive rise in litigation followed the appointment of Judge T. John Ward in 1999, and his drive to create local patent rules. Judge Ward's rules, while similar to patent rules in other federal districts, had some additional plaintiff-friendly features such as a compressed discovery schedule and a short timeline to trial.
This so-called "rocket docket" attracted patent plaintiffs eager to use the compressed schedule to pressure defendants to settle. For those cases that went to trial, the district got a reputation for huge patent verdicts. As one commentator explained, the Eastern District's "speed, large damage awards, outstanding win-rates, likelihood of getting to trial, and plaintiff-friendly local rules suddenly made [it] the venue of choice for patent plaintiffs."

The explosion in patent litigation promptly led to a burst of new economic activity in East Texas. As the BBC wrote, Marshall is a "sleepy town kept busy with patent cases." The patent litigation boom creates business for hotels, restaurants, trial graphics services, copying, expert witnesses, jury consultants, court-appointed technical advisers, and, of course, lawyers. In other words, patent litigation has become important to the economic health of the communities surrounding the courthouse. But the federal courts don't exist to generate business for a particular region.

Tipping The Scales on Both Procedure and Substance

So why are these plaintiff-friendly rules so important? First, the rules impose particular burdens on defendants. If a patent case proceeds to discovery—the process whereby parties hand over information potentially relevant to the case—it will usually be more expensive in the Eastern District of Texas. This is because the local discovery order in patent cases requires parties to automatically begin producing documents before the other side even requests them. In patent troll cases, this imposes a much higher burden on defendants. Operating companies might be forced to review and disclose millions of documents while shell-company patent trolls tend to have very few documents. Trolls can exploit this imbalance to pressure defendants to settle.

Second, the rules make it harder to eliminate cases early. The Supreme Court's decision in Alice v. CLS Bank invalidated many of the low-quality software patents favored by patent trolls. But this only helps defendants if they are able to get a ruling to that effect from the judge overseeing their case. Judges Rodney Gilstrap and Robert Schroeder recently indicated that they would require patent defendants to ask permission before they can file a motion to dismiss raising Alice. This means that defendants in the Eastern District of Texas will more often be forced to go through expensive discovery.

When judges in the Eastern District do issue rulings on challenges raising Alice, their decisions are very different from jurists in other parts of the country. Recent data from Docket Navigator analyzed all challenges under 35 USC § 101 so far this year:

Nationwide: 71% granted or partially granted; 29% denied (76 decisions)
Northern District of California: 82% granted or partially granted; 18% denied (11 decisions)
District of Delaware: 90% granted or partially granted; 10% denied (10 decisions)
Eastern District of Texas: 27% granted; 73% denied (11 decisions)

While each challenged patent claim is different, the overall trend suggests judges in the Eastern District of Texas are applying Alice in a way that is far more favorable to patent owners. The Alice decision, and its companion, Octane Fitness v. Icon Health & Fitness, gave judges additional tools for quickly dismissing meritless patent cases and holding unscrupulous plaintiffs to account.
This means that patent trolls—particularly those that bring weak cases hoping to use the cost of defense to extort a settlement—now need a favorable forum more than ever. Small wonder we've seen a spike in EDTX filings.

We have also written about unfair rules that make it harder for patent defendants to file for summary judgment in the Eastern District of Texas. These rules have a real impact. A recent study found that judges in the Eastern District granted only 18% of motions for summary judgment of invalidity while the national grant rate is 31%. And that statistic, of course, does not include all the summary judgment motions that would have been filed had the defendant been given permission.

Judges in the Eastern District of Texas have also harmed defendants by delaying rulings on motions to transfer (these are motions where the defendant asks for the case to be moved to a more sensible location). Delay prejudices defendants because they are stuck litigating an expensive case in a remote forum while the judge sits on the motion. (The judges' rules make clear that a pending motion to transfer or a motion to dismiss is not grounds to stay discovery in a case.) The Federal Circuit recently issued a stern order (PDF) finding that an Eastern District magistrate judge had "arbitrarily refused to consider the merits" of a transfer motion. When that transfer motion was finally considered, it was granted (PDF), but not until after extensive litigation had already occurred and the parties had been required to pay for a court-appointed technical advisor (PDF). More generally, studies have also found the Eastern District of Texas is reversed by the Federal Circuit at a higher rate compared to other districts.

Venue Reform Can Fix the Mess

It's time for Congress to act. Although the Federal Circuit has overruled some of the Eastern District of Texas' most egregious venue decisions, it has failed to bring basic fairness to where patent cases are litigated. We need new legislation to clarify that patent cases belong in forums with a real connection to the dispute. Fortunately, Congress is looking at the problem. Representative Darrell Issa recently offered an amendment (PDF) to the Innovation Act that would tighten venue standards in patent cases. On June 11, the House Judiciary Committee approved the amendment. If this bill becomes law, shell company patent trolls will no longer be able to drag out-of-state operating companies all the way to Eastern Texas.

It's long past time for Congress to bring fairness to where, and how, patent cases are litigated. Contact your representative and tell them to pass the Innovation Act and to ensure that any final bill includes meaningful venue reform.

Republished from the Electronic Frontier Foundation Deeplinks blog Permalink | Comments | Email This Story
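As a back-of-the-envelope check of the "vanishingly small" probability described above, here is a rough sketch (not EFF's actual methodology): it assumes each of the 3,122 patent cases independently lands in one of the 94 districts with equal probability -- the naive "by chance" baseline -- and computes the binomial tail probability of at least 1,387 of them landing in a single district.

```python
from math import exp, lgamma, log

def log_binom_pmf(k, n, p):
    # log of C(n, k) * p**k * (1-p)**(n-k), kept in log space to avoid underflow
    log_comb = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return log_comb + k * log(p) + (n - k) * log(1 - p)

def log10_upper_tail(k_min, n, p):
    # log10 of P(X >= k_min) for X ~ Binomial(n, p), via a log-sum-exp over the tail
    logs = [log_binom_pmf(k, n, p) for k in range(k_min, n + 1)]
    m = max(logs)
    return (m + log(sum(exp(x - m) for x in logs))) / log(10)

# 3,122 patent cases, 94 districts, at least 1,387 filed in E.D. Tex. "by chance"
print(log10_upper_tail(1387, 3122, 1 / 94))
# prints roughly -1815, i.e. a probability on the order of 10^-1800 --
# smaller than the odds of winning the Powerball jackpot 200 times in a row
```

The equal-probability assumption is generous to the Eastern District; any realistic weighting by population or local technology industry would make the mismatch even starker.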

Read More...
posted 17 days ago on techdirt
As we've been exploring, whistleblowers have been exposing Putin and the Kremlin's use of "troll factories" to fill the internet with propaganda. The efforts run amazingly deep, with employees paid 40,000 to 50,000 rubles ($800 to $1,000) a month to create proxied, viable fake personas -- specifically tasked with pumping the internet full of toxic disinformation 24 hours a day. One of these employees, Lyudmila Savchuk, spent two months employed by the operation and was so disgusted that she quit, launched an anti-propaganda social activist campaign, and decided to sue the Russian government. Amazingly enough Lyudmila Savchuk is not only still alive, but she has won her case. A Russian court has awarded Savchuk symbolic damages of one ruble, her requested damage amount after suing the disinformation barn for non-payment of wages and for failing to give workers proper contracts:

"I am very happy with this victory. I achieved my aim, which was to bring the internet trolls out of the shade," said Savchuk, 34. The Kremlin has claimed that it has no links to the operations of the Agency for Internet Studies. Authorities in Russia have intensified a propaganda campaign as the crisis over Ukraine has sent tensions with the west soaring to their highest level since the cold war.

So yes, Savchuk managed to bring a small portion of one of Putin's companies involved in propaganda (Agency for Internet Studies, or Internet Research) out of the shadows briefly. But the Russian government continues to deny they've any connection to the operation, and the company itself continues to operate unfettered, as do the myriad other similar companies the Kremlin employs to pollute the global discourse mud puddle. Case in point: as Russia waits for the report on what caused the crash of Malaysia Airlines flight MH17 over the Ukraine last year (investigators believe the downing missile was Russian made, and the report is expected to show it was fired from territory held by pro-Russian rebels), a rather ham-fisted attempt to blame the CIA for the crash has been circulating online ahead of the report's release:

"A Russian newspaper posted an audiotape on its website that purports to reveal two US spies plotting to bring down Malaysia Airlines flight MH17 over Ukraine last year. One hitch: The conversations are so stilted and oddly worded that they have been widely dismissed by native English speakers as obviously fake. "If you wanted to believe the CIA is responsible for downing MH17, now you've got the 'proof,'" the self-exiled Russian online newspaper Meduza headlined its report pointing out the awkward language used by the purported spies.

The recording itself certainly sounds as if two sad actors are simply reading from a poorly-translated English script. Of course any Russian internet propagandist worth their salt will probably conclude that this ham-fisted attempt to frame the CIA was cleverly devised by the CIA itself as a sort of reverse head fake (and since the CIA has done numerous stranger things, many might even believe it). Either way, the point stands: while Savchuk may have bravely succeeded in winning one small battle against Putin's propaganda army, it's only the tiniest of dents in what's now a well-established Russian internet disinformation apparatus. Permalink | Comments | Email This Story

Read More...
posted 17 days ago on techdirt
The MPAA has been working on a number of tricks to find a SOPA through the backdoor in the last few months -- more on some of the many attempts coming soon -- but in one attempt, it's suddenly walking away. A few weeks ago, all of the major movie studios filed a lawsuit over the website MovieTube (actually a series of websites). While it may well be that MovieTube was involved in copyright infringement (and thus a lawsuit may be perfectly appropriate), the concerning part was that, as a part of the lawsuit, the studios were demanding a remedy that is not available by law: that anyone who provides any kind of service to MovieTube be forced to stop via a court injunction. This was the kind of tool that was a part of SOPA, which (you may recall) never became law. Among the requests in the lawsuit: That the Registries and/or Registrars be required to transfer the domain names associated with Defendants’ MovieTube Websites, or any subset of these domain names specified by Plaintiffs, to a registrar to be appointed by Plaintiffs to re-register the domain names in respective Plaintiffs’ names and under Plaintiffs’ respective ownership. That content delivery networks and domain name server systems be required to cease providing services to the MovieTube Websites and/or domains identified with the MovieTube Websites and disable any access to caches they maintain for the MovieTube Websites and destroy any caches they maintain for the MovieTube Websites. That third parties providing services used in connection with any of the MovieTube Websites and/or domain names for MovieTube Websites, including without limitation, web hosting providers, cloud services providers, digital advertising service providers, search-based online advertising services (such as through paid inclusion, paid search results, sponsored search results, sponsored links, and Internet keyword advertising), domain name registration privacy protection services, providers of social media services (e.g., Facebook and Twitter), and user generated and online content services (e.g., YouTube, Flickr and Tumblr) be required to cease or disable providing such services to (i) Defendants in relation to Infringing Copies or infringement of Plaintiffs’ Marks; and/or (ii) any and all of the MovieTube Websites. A few days later, the good folks at EFF reminded everyone that SOPA did not pass, and this attempt to require a SOPA-level block is not actually what the law allows. Of course, as we noted soon after the SOPA fight, it appeared that some courts were pretending SOPA did pass, mainly in a variety of lawsuits involving counterfeit goods (rather than copyright infringement). And the movie studios rely on that in their more detailed argument in favor of this broad censorship order on third parties who aren't even a part of this case: Courts have granted similar interim relief directed to third-party service providers in cases with similar facts. The first such case, The North Face Apparel Corp. v. Fujian Sharing Import & Export Ltd. (“Fujian ”), 10-Civ-1630 (AKH) (S.D.N.Y.), was brought against defendants in China selling counterfeit goods through the Internet directly to consumers in the United States. In Fujian, the district court granted an ex parte temporary restraining order, seizure order, asset restraining order, and domain-name transfer order, later continued by a preliminary injunction order. 
Of course, last week, a bunch of internet companies -- Google, Facebook, Tumblr, Twitter and Yahoo -- filed an amicus brief highlighting how ridiculous the widespread demand is: Plaintiffs are asking the Court to grant a preliminary injunction not just against the named Defendants, but also against a wide array of online service providers—from search engines, to web hosts, to social networking services—and require them to “cease providing services to the MovieTube Websites and Defendants[.]” None of those providers is a party to this case, and Plaintiffs make no claim that any of them have violated the law or play any direct role in the Defendants’ allegedly infringing activities. Plaintiffs’ effort to bind the entire Internet to a sweeping preliminary injunction is impermissible. It violates basic principles of due process and oversteps the bounds of Federal Rule of Civil Procedure 65, which restricts injunctions to parties, their agents, and those who actively participate in a party’s violations. The proposed order also ignores the Digital Millennium Copyright Act (“DMCA”), which specifically limits the injunctive relief that can be imposed on online service providers in copyright cases. Even if Plaintiffs had named those providers as defendants and obtained a final judgment against them, the DMCA would not permit the relief that Plaintiffs are asking for at the outset of their case, where they have not even tried to claim that these nonparties have acted unlawfully. And... just days later, the movie studios tell the judge that they need not rule on this issue at all, and they're happy to drop the request for the preliminary injunction entirely, because the MovieTube websites have already been shut down (h/t to Eriq Gardner, who first reported on the studio's letter). We represent Plaintiffs in the above-titled action. We write to inform the Court that after Plaintiffs filed their Complaint (and presumably in response thereto), Defendants shut down their infringing websites, and as of today, such websites remain offline. Plaintiffs are no longer seeking preliminary injunctive relief at this time but will seek permanent relief as soon as possible. Defendants’ time to answer or otherwise respond is August 19, 2015. Moreover, because Plaintiffs have withdrawn their motion for preliminary injunctive relief, the arguments offered by Amici Curiae... in opposition to that motion are not ripe for consideration and are otherwise inapplicable. Accordingly, Plaintiffs have not addressed them here. To the extent Amici are requesting what amounts to an advisory opinion, such a request is improper and should not be entertained. In short: we had hoped to quietly get a court to pretend SOPA existed so we could point to it as proof that this is perfectly reasonable... but the internet folks spotted it, so we'll just walk away quietly, and hope that next time, those darn internet companies, and those eagle-eyed lawyers at the EFF aren't so quick to spot our plan.Permalink | Comments | Email This Story

Read More...
posted 17 days ago on techdirt
Hollywood is still 100% focused on trying to blame the internet for any of its woes, mostly with bogus attacks on internet companies it doesn't like. And yet... it seems to keep on setting box office records. The latest is that Universal Pictures has broken a new record in bringing in $2 billion in box office revenue faster than any other studio in history, pushed over the top by the successful opening weekend of "Straight Outta Compton" (a movie that seems to have some big fans in Silicon Valley). Thanks to the overperformance of N.W.A biopic “Straight Outta Compton,” Universal Pictures is tracking to cross the $2 billion mark at the domestic box office on Saturday, setting a new speed record in doing so. Universal’s historic climb will break Warner Bros.’ previous record of reaching $2 billion by December 25, 2009. The studio is also extremely likely to break the record for all-time domestic box office high, which was set by WB in 2009 with $2.1 billion. That does not sound like an industry that is having a problem getting people into the theaters, even if the movies are available via infringement. But, people will argue, these services are actually harming the "home video" revenue stream. But that's questionable as well. First off, it was Hollywood that angrily fought against ever allowing a home video market in the first place (remember that?). And, more to the point, we've seen over and over again that when the industry actually adapts and offers content in a reasonable format at a reasonable fee, people will pay at home, just like they do in theaters. But, of course, due to continued Hollywood confusion and jealousy, it's still holding back lots of movies from Netflix streaming -- one successful service that has shown that it's totally possible to "compete" with infringing content. So, again, it's confusing as to what Hollywood's real complaint is. It's shown that if it makes good films, people will go out to the theaters to see them, rather than just watch them online. And if it offers them in a reasonable manner for a reasonable price online, people will pay for that as well. And yet, it doesn't do a very good job of this and then blames the internet for its own failures to adapt. Seems like a weird strategy. If I were an investor in those companies, I'd wonder why they've spent the better part of two decades so focused on "stopping piracy" rather than doing a better job delivering what the public wants.Permalink | Comments | Email This Story

Read More...
posted 17 days ago on techdirt
One of the key realizations over the last few years, especially post-Snowden, is that there is no such thing as "just metadata". Collecting metadata is not only as bad as collecting content, it is arguably worse. Whereas content must be parsed and understood -- something that is still quite hard to do well in an automated fashion -- metadata by definition is already classified and tagged. That makes it very easy to combine with other information, and in a way that scales, to reveal extremely intimate details about the person it refers to. Techdirt has already run a couple of stories that demonstrate this. Back in 2011, the German politician Malte Spitz obtained his own phone location data, and cross-referenced it with his Twitter feeds, blog entries and other digital information to give a remarkably full picture of his daily life. More recently, Ton Siedsma went even further, allowing researchers to analyze all of the metadata generated by his phone -- with a predictably detailed picture of Siedsma emerging as a result. Now a brave reporter from the Australian Broadcasting Corporation, Will Ockenden, has requested and made available a year's worth of his outgoing call and SMS records, and six months of his data sessions on a Web page: All in all, this simple data request returned 13,000 individual records. There were 1,500 outgoing phone calls and SMSes but the vast majority -- 11,200 records -- were data sessions, complete with the time and date his phone connected to the mobile network and which cell tower it connected to. In other words, by carrying a smartphone Will was in effect carrying a tracking device that logged roughly where he was every 20 minutes of every day, on average. Government departments, police and security agencies have access to all the data Will received about himself -- and more -- without the need for a warrant. As the article points out, this exercise has a special relevance for Australians because of a new data retention law that has been brought in this year. Like many other leaders doing the same, the Australian Prime Minister tried to soothe people's fears about this manifest intrusion into their private lives using the standard "it's just metadata" argument: "We're talking here about metadata; we're not talking here about the content of communications," Prime Minister Tony Abbott said in February. "It's just the data that the system generates." What's particularly valuable about this latest provision of real-life metadata is that the public can explore for themselves how much it reveals, by playing with the interactive tools provided on the site: Over the coming days we're going to use these tools to delve deeper into Will's data and report back on what we discover. We'll be writing about what we can infer about Will, as well as how police and other agencies might use data like this. But we want your help. We're releasing these exploratory tools so you can tell us what you're able to find out about Will. You can also get the complete dataset to explore yourself. This is a great way to get across to people that there is no such thing as "just metadata". Let's hope it encourages Australians to start questioning the huge data grabs of highly personal information being carried out by their government, along with its bogus assurances on privacy. Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+ Permalink | Comments | Email This Story
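For anyone wondering where the "roughly every 20 minutes" figure above comes from, and why a pile of tower IDs and timestamps amounts to a location diary, here is a rough sketch with made-up data. The record layout and tower names are hypothetical, not the actual format of Ockenden's released dataset.

```python
from collections import Counter
from datetime import datetime

# 11,200 data sessions over roughly six months of records:
days = 182
print(round(days * 24 * 60 / 11200))  # ~23 minutes between records, on average

# Each record only needs a timestamp and a cell-tower ID ("just metadata")
# to sketch a daily routine. Hypothetical records:
records = [
    ("2015-08-03 08:10", "tower_near_home"),
    ("2015-08-03 09:05", "tower_near_office"),
    ("2015-08-03 13:40", "tower_near_office"),
    ("2015-08-03 19:55", "tower_near_home"),
]

seen = Counter()
for ts, tower in records:
    hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
    seen[(hour, tower)] += 1

# Which tower the phone talks to at each hour of the day: effectively a map
# of where its owner sleeps, works and spends evenings.
for (hour, tower), count in sorted(seen.items()):
    print(f"{hour:02d}:00  {tower}  x{count}")
```

Scaled up to 11,200 real records, the same few lines of aggregation are all it takes to turn "just metadata" into a picture of someone's daily movements.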

Read More...
posted 17 days ago on techdirt
There's going to be a point where Moore's law stops -- because the things we build can only get so small before quantum physics starts to really mess with how circuits behave. Still, researchers keep pushing technology to make smaller and smaller devices. Molecular electronics aren't practical just yet, but the development of nanoscale components isn't completely ridiculous. Here are just a few examples. Origami and kirigami (aka origami that allows for cutting) could be useful in designing electronics from sheets of conductive materials like graphene. Flexible and bendable gadgets could be good for wearable or implantable devices, but it might take a while before graphene is ready for consumer electronics. [url] A single molecule diode made from a single symmetric molecule, an ionic solution and two gold electrodes has set a performance record, beating previous molecule-sized diodes by a factor of 50. Clearly, no one is going to be using this diode outside of a lab, but it could help design better fundamental devices with extremely small dimensions. [url] A team of scientists has created a field-effect transistor (FET) from a single molecule -- but it requires a scanning-tunneling microscope (STM) to function. This isn't the first single molecule transistor, and it probably won't be the last. However, it's still going to be tricky to find a way to make these nanoscale components useful for practical purposes. [url] After you've finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.Permalink | Comments | Email This Story

Read More...
posted 17 days ago on techdirt
We've all heard it before: [industry X] can't compete in the marketplace because the public just wants everything for free. It's a mantra taken up by the film industry, the recording industry, the literary industry, and the video game industry. And, almost always, we've found that the mantra is complete nonsense. Instead, it's been clear that the public is more than willing to purchase that which is scarce and valued. It's just that those scarce and valued things are oftentimes not the content itself. The retro-gaming industry is instructive in this for two reasons. Piracy is typically much easier for retro games than modern titles. Most of the older consoles have been fully emulated at this point, with ROMs and games readily available for them online. Older PC titles often have no DRM or were cracked so long ago that the cracked files are also readily available. In addition, retro titles aren't policed the same way that modern releases are. And, yet, despite all of that, or perhaps because of it, the retro-gaming industry is exploding. Sites like GOG.com and Steam's client offer old games with smaller price tags. The major console-makers like Nintendo, Sony, and Microsoft all have their own marketplaces for digital downloads of retro games. Those marketplaces must be doing quite well, considering that the consoles and publishers continue to support them and expand the retro-game catalogs. And, for the actual old products, the interest and prices for retro-game pieces are skyrocketing. Giulio Graziani says it makes him feel a bit like a drug dealer, even though he's not buying anything illegal. It's part of his job digging up a steady supply of video games from the 1980s and 1990s for his store, VideoGamesNewYork, which specializes in everything from Atari and Gameboy to rare prototype NES cartridges. Graziani, 50, has been in business since 2003, but says the market only recently began to spike. "Five years ago, I could drive through Texas and stop in little towns and buy everything," he says. "Now they're selling games out there for more than I do!" Even simple pieces, like The Legend of Zelda Ocarina of Time, which cost $12 in 2010, now go for $25. More coveted games, like Nintendo's Earthbound, can fetch hundreds of dollars, even thousands if they're in the original box. It's always been this way. Collectors of art will always pay for original pieces, or for the items that go along with the actual content. If the public simply wanted everything for free when it came to gaming, anyone could go on the internet and get an emulator and a copy of The Legend of Zelda and have at it. But, of course, there are scarce items that go along with the collectibles that can't be downloaded, and so the prices are paid, even as they rise. And it's not peanuts we're talking about here. Estimates for how big the retro-gaming market is come in at something like $200 million per year. For those who aren't collectors, however, there's still a reason to buy. Luckily, for a casual retro gamer, there are some cheap solutions to get a quick dose of nostalgia. Nintendo's Virtual Console allows you to download classic titles to play on the Wii U or Nintendo 3DS. The Retron 5 console by Hyperkin sells for $159.99 and supports games for 10 systems, including NES, SNES, Famicom, SENES, Genesis and Game Boy. Add to that GOG and Steam, along with the old-game marketplaces Sony and Microsoft offer, and the RtB here should be clear: ease of purchase and the platform. 
Much like it is understood that iTunes is attractive because of the platform, rather than the music catalog that is also available via piracy, so do gamers appreciate the convenience offered by these marketplaces. Which is why they're growing and selling more and more. If anything should signal the end of the "everyone wants everything for free" myth, let it be retro-gaming. Permalink | Comments | Email This Story

Read More...
posted 17 days ago on techdirt
More evidence has surfaced showing local law enforcement agencies are using high-powered surveillance equipment -- equipment originally designed for the military and highly recommended by the NSA. Ali Winston and the Center for Investigative Reporting have obtained documents showing both Chicago and Los Angeles have used "dirt boxes" (DRT -- Digital Receiver Technology "boxes": high-powered cell site simulators) since 2005. In Chicago, the Digital Receiver Technology equipment was purchased in 2005 with funds collected in asset forfeiture cases. In Los Angeles, the police department purchased a package of Digital Receiver Technology equipment with $260,000 in homeland security grant funding. The sole-source purchase was approved unanimously by the Los Angeles City Council in 2005. Both departments also use StingRays, a more commonly known cell-site simulator manufactured by the Florida-based Harris Corp. As usual, the agencies remain close-lipped about their DRTbox deployment. In fact, the LAPD simply refused to release responsive documents about its DRT equipment, in violation of the state's FOI laws. Boeing, which purchased DRT in 2008, has also refused to comment. The LAPD's refusal to hand over documents didn't stop the Center for Investigative Reporting from uncovering its DRTbox purchase, however. It found confirmation of the department's DHS grant-supported purchase in the city's public records. While much of what's been uncovered about Stingrays and other cell tower spoofers tends to focus on the collection of phone and location data, the devices are also used to intercept communications. Documents from Digital Receiver Technology clearly spell out these interception capabilities. The DRT1000 System Software is a suite of applications that includes the software that runs on the DRT system, Alaska (GUI), Applause (Audio Playback), DF and Geolocation (with DRTview), and Yukon (IP Configuration utility). The DRT1000 system may be used to: identify and collect audio, data and Signal Related Information (SRI). The company also manufactures auxiliary products that capture "push-to-talk" communications, something that has been used in the past to elude law enforcement surveillance efforts. While very few law enforcement agencies have produced documents indicating the purchase and use of these "Stingrays on steroids," DRT's own marketing materials proudly note the company has sold more than 5,000 units as of May 2015. And it is indeed bulked-up. The Los Angeles city clerk's notes on the LAPD's purchase state that DRT's products offer more capabilities than its competitors. The LAPD has done extensive research to determine the best product available for tracking cellular telephone calls. Most vendors only offered a product that could track phones issued by one or two cellular service providers. Conversely, DRT's equipment can track and monitor all cellular phone traffic, making it the most advanced product on the market. To match the capability of DRT's product, the LAPD would be required to purchase several individual systems from other vendors at a far greater total cost. This increased capability comes at a premium price, but it's not as though local police departments will be expected to pick up the tab. Funding from the FY04 State Homeland Security Grant Program (Council File 04-2499), in the amount of $260,000, was previously allocated for the purchase of a Cell Phone Tracking System (CPTS). 
Chicago's route to greater surveillance capabilities ran through its asset forfeiture program. In addition to its DRT purchases, the 193 pages of responsive documents show multiple purchases from Harris, including multiple upgrades to existing equipment. As is the case nearly everywhere, the acquisitions were done in secret and usually without a true bidding process -- a very popular option for departments acquiring surveillance technology. No bidding process means no generation of additional FOIA-able paperwork. Not found among the responsive documents was anything detailing policies regarding use of these powerful devices or any examination of potential privacy implications by the agencies seeking the equipment. Despite almost-daily revelations of cell tower spoofer acquisition and use (via FOIA requests, mostly), law enforcement agencies are still willing to drop cases and dismiss evidence rather than allow additional public scrutiny. The use of these tools also continues to be hidden behind pen register requests and parallel construction. The good news is that the continued revelations have finally caught the eye of the nation's legislators. U.S. Rep. Alan Grayson, D-Florida, who is on the House Foreign Affairs and Science, Space and Technology committees, said the use of technology with eavesdropping capabilities without a warrant is “a clear violation of the Fourth Amendment.” By using such technology without informing judges of the full capabilities of these devices, Grayson said, law enforcement officials are exposing their casework to appeals under the exclusionary rule, which mandates that evidence gathered illegally cannot be used in court proceedings. “They’re essentially messing up every case they use these devices in,” he said. Whether or not this additional attention will result in Fourth Amendment-friendly legislation remains to be seen. Strongly-worded statements are great, but it will take a real, concerted effort to create a warrant requirement, much less restore some expectation of privacy to "business records" like connection and location info. Permalink | Comments | Email This Story

Read More...
posted 17 days ago on techdirt
The recent Snowden documents published by the New York Times and ProPublica confirm the close relationship between AT&T and the NSA, which would explain the deafening silence the company maintained in response to the first few months of leaks. (Such "partnerships" likely exist with Verizon and other providers, although nothing has been directly confirmed by leaked documents, and such partnerships may have done a bit of "dissolving" shortly after the leaks began.) But the documents also highlighted the difference between what NSA officials claimed they were getting and what they were actually getting from telcos. In early 2014, officials claimed they were only obtaining between 20 and 30 percent of domestic call records. According to unnamed "current and former officials," the explosion of cellphone use was leaving the NSA's bulk collection programs in the dust. These documents show there's a huge discrepancy between the unofficial "official" statements and the NSA's actual capabilities, as the ACLU's Kade Crockford points out. The New York Times reports on documents disclosed by former NSA contractor and whistleblower Edward Snowden: "In 2011, AT&T began handing over 1.1 billion domestic cellphone calling records a day to the N.S.A. after “a push to get this flow operational prior to the 10th anniversary of 9/11,” according to an internal agency newsletter. This revelation is striking because after Mr. Snowden disclosed the program of collecting the records of Americans’ phone calls, intelligence officials told reporters that, for technical reasons, it consisted mostly of landline phone records." I must quibble a bit with the New York Times excellent reporting here, only to suggest that what's "striking" about the discrepancy between what journalists reported and the truth isn't the fact that the NSA would lie to journalists. What's striking is that journalists continue to print official, often anonymous claims about government surveillance programs without a shred of evidence that those claims are true. This has long been the problem with journalists' reliance on "unnamed government sources." Without a name to attach the statement to, no one can be held accountable for lying to the American public. While many sources would not comment at all without the protection of anonymity, the statements issued are seldom questioned by entities with an obligation to challenge anonymous assertions. On the other hand, emptywheel's Marcy Wheeler points out these claims may have an element of truth, but only because Snowden's leaks and problematic cell location data (which isn't specifically covered by Section 215) were preventing the agency from collecting all it wanted to, or perhaps all it used to. We know from the Congressional notice AT&T was willing to strip [location data]. For a lot of reasons, it’s likely Verizon was unwilling to strip it. This is one of the possible explanations I’ve posited for why NSA wasn’t getting cell data from Verizon, because any provider is only obliged to give business records they already have on hand, and it would be fairly easy to claim stripping the cell location data made it a new business record. Which is another important piece of evidence for the case made against AT&T in the story. They were willing to play with records they were handing over to the government in ways not required by the law. Though who knows if that remain(ed) the case? To get to the 30% figure quoted in all the pieces claiming NSA wasn’t getting cell data, you’d probably have to have AT&T excluded as well. 
So maybe after the Snowden releases, they, too, refused to do things they weren’t required to do by law... So, it could very well be that the NSA wasn't getting all the cell records it wanted to, making these claims mostly factual. But the Wall Street Journal and New York Times both attribute quotes to "former officials" as well, which would include officials in place prior to the Snowden leaks. If the leaks resulted in a sudden reluctance to provide these records, it would only have occurred after June 2013. The questionable statements were made in the first few months of 2014. That's not much of a gap, especially if former officials are speaking from their personal experience. Maybe the NSA was only getting 20-30% of what it was seeking. Or maybe it only wanted to give that impression. Nothing about the statements reflects a sudden downturn in collections. Instead, they portray it as an ongoing problem. The underlying issue is the secrecy of surveillance programs, which leads to statements that can't be confirmed (or refuted) until evidence is provided. And these officials aren't going to hand out this info. It has to come from whistleblowers and leakers. The government can skate by on lies and misperceptions, especially when national security is involved. Permalink | Comments | Email This Story

Read More...
posted 17 days ago on techdirt
As we recently covered, ProPublica (in conjunction with the New York Times) published another set of documents exposing AT&T's long-running position as Alfred to the NSA's bulk collection Batman. The documents contained glowing quotes from various NSA operatives and officials touting the telco's subservience. “highly collaborative!” "extreme willingness to help!" “This is a partnership, not a contractual relationship.” "...access to massive amounts of data!" For a company not exactly famous for its customer service, AT&T is probably eyeballing these glowing pull quotes and trying to figure out how to spin them into positive PR. Not only are many AT&T customers unhappy with their provider, but the rest of the government is less than impressed with AT&T's actions. The FCC recently decided to start doing its job re: AT&T's routine abuse of its customers. It went after the telco giant for turning a blind eye to rampant fraudulent abuse of an IP relay service set up to assist the hearing impaired (yes, the metaphor is clunky) -- something that had gotten so bad it was estimated the program's relay traffic was about 5% legitimate service and 95% scammers. Late last year, the FCC also cracked down on AT&T for its symbiotic relationship with shady services which offered "premium" garbageware that was billed monthly to unaware cell phone users for indefinite periods of time. AT&T was in no hurry to end this, despite customer complaints, because it netted about 35% of the total haul. And in May of this year, AT&T settled with the FCC for misappropriating federal funding meant to provide phone service to low-income households. [T]he FCC has announced that it has struck a settlement with AT&T and former subsidiary SNET, over charges the companies were collecting undeserved subsidies under the agency's "Lifeline" program, a low-income community subsidy effort created by the Reagan administration in 1985 and expanded by Bush in 2005. According to the FCC's findings, AT&T apparently "forgot" to audit its Lifeline subscriber rolls and purge them of non-existent or no-longer-eligible customers, allowing it to continue taking taxpayer money from a fund intended to aid the poor. And this is only what the FCC has actually addressed over the past few years. AT&T's sketchy behavior traces back to well before its national security obeisance was a twinkle in the intelligence community's eye. So, the agency tasked with national security claims its favorite "partner" is a scammy, bloated, abusive corporation. It makes a certain amount of sense. The NSA doesn't care how badly AT&T treats everyone else, just as long as it still makes feeding the agency data and communications one of its top priorities. And as for AT&T's apparent lack of a functioning spinal column, it turns out there's really only one backbone that matters in the surveillance world: the one that "belongs" to the internet. Permalink | Comments | Email This Story

Read More...
posted 17 days ago on techdirt
Late in 2013, a few like-minded individuals decided Yelp owed them money for all the reviews they had voluntarily written over the years. The class action suit basically alleged Yelp's machinery was lubricated with the blood of unpaid reviewers. The original filing threw off the shackles of normal, dry legal prose in favor of phrases like "thumbing its nose at workers… and taxing authorities," "dependent on a horde of non-wage-paid reviewers," "system of cult-like rewards and disciplines," and "a 21st-century galley slave ship with pirates banging the drums." All in all, the original complaint reads more like a screed against content farming than a list of recognizable torts. Somehow, it has managed to survive a venue shift and the ineptitude of the plaintiffs' counsel, as well as a complete dismissal. But only barely. And with the threat of Anti-SLAPP fees looming over it. In the opinion granting the smallest of leeway to the complaining party, the judge first notes that not only are they supposed to state coherent claims, but they're also supposed to explain why they're entitled to relief. The court has some problems with the claims themselves. Here, each of the three named plaintiffs alleges that he or she “was hired by Yelp, Inc. as a writer and she fulfilled that job description and job functions.” Each plaintiff allegedly was “directed how to write reviews and given other such employee type direction from employer defendant.” Yelp allegedly controlled each plaintiff’s “work schedule and conditions.” Two of the three plaintiffs are alleged to have been “fired” with “no warning [and] a flimsy explanation.” Clearly a violation of labor laws... except for the part where the plaintiffs wholly misrepresented the actual situation. A reasonable inference to be drawn from the complaint, and from plaintiffs’ arguments, is plaintiffs use the term “hired” to refer to a process by which any member of the public can sign up for an account on the Yelp website and submit reviews, and the term “fired” to refer to having their accounts involuntarily closed, presumably for conduct that Yelp contends breached its terms of service agreement. A further reasonable inference is that plaintiffs and the putative class members may contribute reviews under circumstances that either cannot be reasonably characterized as performing a service to Yelp at all, or that at most would constitute acts of volunteerism. As such, the labor law cited by the plaintiffs is inapplicable, since it does not cover voluntary acts, even if said acts are monetized by the defendant. That being said, the judge doesn't completely close the door on this particular claim. He notes that it may be possible (but highly unlikely) that a recognizable labor law claim can be raised from this mess of a lawsuit and has given the plaintiffs 20 days to file an amended complaint. Yelp asked the court for attorney's fees under California's Anti-SLAPP law, claiming the lawsuit was nothing more than an attempt to chill its speech, seeing as the claims themselves were baseless and publishing reviews is (duh) a protected form of speech. The court has some sympathy for this argument, partially because the plaintiffs were unable to respond coherently to Yelp's Anti-SLAPP motion. Plaintiffs offer a strident argument, marked by ad hominem attacks on counsel, that the anti-SLAPP statute is inapplicable because they merely seek to hold Yelp liable under quasi-contractual theories for non-payment of wages, not for exercising any free speech rights. 
A plaintiff’s theory of liability, however, is not relevant to the question of whether the claim “arises from” a defendant’s exercise of free speech rights. As alleged, Yelp publishes information— reviews—regarding the services and goods various business establishments offer to the public. Plaintiff’s claims plainly arise from that conduct, which undisputedly involves speech on matters of public interest. Unfortunately, the ad hominem attack on Yelp that was filed instead of a proper response is buried somewhere in the mess of a docket -- something that has less to do with a venue move and the consolidation of filings, and more to do with the plaintiffs' counsel being mostly incompetent. (A read through other filings makes it clear the counsel humored the plaintiffs' desire to be cast as warriors railing against an unjust system, rather than steer them towards articulable arguments and claims.) Both before and after the transfer of this action to this district, plaintiffs have filed various pleadings with confusing titles in the captions and/or garbled descriptions and/or incorrect event types in the electronic docket entries. As a result, the court in the Central District issued several orders striking certain of plaintiffs’ submissions. Complaining that plaintiffs’ filings in this district failed to comply with the local rules on notice, and that various ambiguities now exist as to what motions are pending and as to the briefing schedules, defendant seeks “administrative relief” under Local Rule 7-11. Defendant requests an order vacating the current hearing, and postponing at least one of plaintiffs’ motions until after a case management conference has been held. [...] More problematically, plaintiffs failed to set the hearing in the ECF system, with the result that no briefing schedule appeared in the docket entry. It is counsel’s responsibility to learn to use the ECF system correctly, and he is hereby directed to undertake appropriate steps to ensure that he and/or his office staff can file and docket pleadings in the proper manner. Whether or not this sort of ineptitude led to the plaintiffs' counsel being disciplined by the State Bar isn't made explicit, but a later motion by the plaintiffs attempted to obtain an injunction against the Bar's proceedings as well as add it (!) as a defendant. This was denied. Whatever the case, the judge finds both prongs of the Anti-SLAPP test have been met (the second being the failure to state a viable claim), but says it's still too early to consider the question of awarding fees. The plaintiffs have also asked for sanctions to be awarded against Yelp, but the judge notes he's doing most of the work in discerning what it is the plaintiffs are actually seeking. Plaintiffs seek sanctions against Yelp and its counsel. Although the precise basis of plaintiffs’ complaints and the scope of the relief they seek are difficult to discern from their rambling and invective-filled papers, the gist appears to be a contention that Yelp did not exercise good faith in connection with a court-ordered mediation session, and that it should therefore be placed into “default” status until it funds and participates in a new mediation session, and pays monetary penalties to the court. Plaintiffs have failed to make a persuasive showing that Yelp or its counsel engaged in sanctionable misconduct, or that the relief they seek would be warranted in any event. The motion for sanctions is denied. 
The only remaining option for the plaintiffs at this point is to file a coherent set of claims that might have a chance at surviving more than a cursory examination. The plaintiffs may have entered the legal arena with every intention of extracting paychecks for their voluntary efforts, but given this opinion, it now seems far more likely they'll end up losing both the case and their money. Permalink | Comments | Email This Story

Read More...
posted 18 days ago on techdirt
Over the years, Techdirt has had a couple of stories about misguided chefs who think that people taking photos of their food are "stealing" something -- their culinary soul, perhaps. According to an article in the newspaper Die Welt, it seems that this is not just a matter of opinion in Germany, but established law (original in German): In individual cases, shared pictures may be illegal. At worst, a copyright warning notice might come fluttering to the social media user. For carefully-arranged food in a famous restaurant, the cook is regarded as the creator of a work. Before it can be made public on Facebook & Co., permission must first be asked of the master chef. Apparently, this situation goes back to a German court judgment from 2013, which widened copyright law to include the applied arts too. As a result, the threshold for copyrightability was lowered considerably, with the practical consequence that it was easier for chefs to sue those who posted photographs of their creations without permission. The Die Welt article notes that this ban can apply even to manifestly unartistic piles of food dumped unceremoniously on a plate if a restaurant owner puts up a notice refusing permission for photos to be taken of its food. It's sad to see this kind of ownership mentality has been accepted by the German courts. As a Techdirt article from 2010 explained, there's plenty of evidence that it is precisely the lack of copyright in food that has led to continuing innovation -- just as it has in other fields that manage to survive without this particular intellectual monopoly, notably in fashion. Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+ Permalink | Comments | Email This Story

Read More...
posted 18 days ago on techdirt
Give your IT skills a boost and grab the CompTIA IT Certification Bundle for 95% off in the Techdirt Deals store. The four courses include instructions on how to install & configure hardware, how to set up network connectivity and email for Android and iOS, and how to master security, safety and environmental issues. Once you've completed the courses, you'll be ready to take the CompTIA A+ certification exam and be well on your way to becoming an IT expert. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.Permalink | Comments | Email This Story

Read More...
posted 18 days ago on techdirt
For years broadband providers tried to claim that they were imposing usage caps because of network congestion. In reality they've long lusted after usage caps for two simple reasons: they allow ISPs to charge more money for the same product, and they help cushion traditional TV revenues from the ongoing assault from Internet video. Instead of admitting that, big ISPs have tried to argue that caps are about "fairness," or that they're essential lest the Internet collapse from uncontrolled congestion (remember the debunked Exaflood?). Over the years, data has shown that caps aren't really an effective way to target network congestion anyway, can hinder innovation, hurt competitors, and usually only wind up confusing consumers, many of whom aren't even sure what a gigabyte is. Eventually, even cable lobbyists had to admit broadband caps weren't really about congestion, even though they still cling to the false narrative that layering steep rate hikes and overage fees on top of already-expensive flat-rate pricing is somehow about "fairness." Comcast is of course slowly but surely expanding usage caps into its least competitive markets. More recently the company has tried to deny it even has caps, instead insisting these limits are "data thresholds" or "flexible data consumption plans." But when asked last week why Comcast's caps in these markets remain so low in proportion to rising Comcast speeds (and prices), Comcast engineer and vice president of Internet services Jason Livingood candidly admitted on Twitter that the decision to impose caps was a business one, not one dictated by network engineering. Jason's not the first engineer to admit that caps aren't an engineering issue and therefore don't have anything to do with congestion. In fact if you followed the broadband industry's bunk Exaflood claims over the last decade, you probably noticed that ISP lobbyists say one thing (largely to scare legislators or the press into supporting bad policy), while actual engineers say something starkly different. Repeatedly we've been told by ISP lobbyists and lawyers that if ISPs don't get "X" (no net neutrality rules, deregulation, more subsidies, the right to impose arbitrary new tolls, whatever), the Internet will choke on itself and grind to a halt. In contrast, the actual people building and maintaining these networks have stated time and time again that nearly all congestion issues can be resolved with modest upgrades and intelligent engineering. The congestion bogeyman is a useful idiot, but he's constructed largely of bullshit and brainless ballast. Livingood will likely receive a scolding for wandering off script. Comcast, unsurprisingly, doesn't much want to talk about the comment further: "We've asked Comcast officials if there are any technology benefits from imposing the caps or technology reasons for the specific limits chosen but haven't heard back yet. Livingood's statement probably won't come as any surprise to critics of data caps who argue that the limits raise prices and prevent people from making full use of the Internet without actually preventing congestion." That's worth remembering the next time Comcast tries to insist that its attempt to charge more for the same service is based on engineering necessity. The problem? Our shiny new net neutrality rules don't really cover or restrict usage caps, even in instances when they're clearly being used to simply take advantage of less competitive markets. 
While Tom Wheeler did give Verizon a wrist slap last year for using the congestion bogeyman and throttling to simply make an extra buck, the FCC has generally been quiet on the implementation (and abuse) of usage caps specifically and high broadband prices in general. There are some indications that the FCC is watching usage caps carefully, and says it will tackle complaints about them on a "case by case basis." But what that means from an agency that has traditionally treated caps as "creative" pricing isn't clear. It's another example of how our net neutrality rules were good, but serious competition in the U.S. broadband sector would have been better.Permalink | Comments | Email This Story

Read More...
posted 18 days ago on techdirt
There's something... weird about American publications, which regularly rely on the First Amendment, arguing against those very freedoms. Obviously, part of the joys of free speech is that of course they're allowed to express opinions on why we should have less free speech... but it's still odd. The latest entrant is from the New Yorker, which has a long piece by Kelefa Sanneh that supposedly takes a look at the "new battles over free speech" and raises some of the usual concerns these days about how there have been a number of high profile (and low profile) situations recently where people have used their free speech abilities to demand that others, with views they disagree with, be silenced. There are reasonable and potentially interesting debates and discussions to be had around these issues, and how some have really decided that "free speech" can't somehow include any form of "speech we don't like" -- as ridiculous as that concept seems to many of us. However, Sanneh's piece is none of that. It focuses mostly on two recent books, both of which argue that "the left" is looking to stamp out free speech (it's the whole "political correctness debate" warmed over yet again). But, the article itself is oddly... devoid of any actual discussion on free speech, why it's important, or any actual free speech experts. You would think they'd at least check in with a few. But without that, the piece is chock full of just downright false claims. The good folks at FIRE (the Foundation for Individual Rights in Education) have done a nice takedown of the piece (yay, counter-speech!), discussing -- over two separate posts -- ten different things the New Yorker gets wrong, but I wanted to focus on one of the stranger arguments made in the article -- one that appears to slam "free speech extremists" as if they're crazy and have no rational basis. Speech nuts, like gun nuts, have amassed plenty of arguments, but they—we—are driven, too, by a shared sensibility that can seem irrational by European standards. And, just as good-faith gun-rights advocates don’t pretend that every gun owner is a third-generation hunter, free-speech advocates need not pretend that every provocative utterance is a valuable contribution to a robust debate, or that it is impossible to make any distinctions between various kinds of speech. In the case of online harassment, that instinctive preference for “free speech” may already be shaping the kinds of discussions we have, possibly by discouraging the participation of women, racial and sexual minorities, and anyone else likely to be singled out for ad-hominem abuse. Some kinds of free speech really can be harmful, and people who want to defend it anyway should be willing to say so. Except, nearly everything said there about free speech "nuts" is wrong. Many are more than willing to admit that much of what they defend makes absolutely no valuable contribution to a robust debate. But that's the point. Defending free speech is about recognizing that there will be plenty of value-less speech, but that you need to allow such speech in order to get the additional valuable speech. And, contrary to the claims in the article (note the lack of quotes to support the point), plenty of free speech advocates are quite reasonably worried about the ways in which certain kinds of discussions may be "discouraging the participation of women, racial and sexual minorities." Hell, Sarah Jeong just wrote a whole book about this. 
Or how about the Dangerous Speech Project, which specifically examines how some speech can lead to violence, but still approaches it from a free speech perspective? Pretending no one even considers these things is simply wrong. You would think that the New Yorker, with its vaunted "fact checking" department, would have at least looked at these things. The point is that you can recognize how some speech may discourage other speech and still not immediately leap to saying censorship must be the answer. It is entirely possible to say that there are some kinds of speech you find problematic, but then look for other ways to deal with them -- such as with counter speech, or with technology choices that can minimize the impact -- that don't involve taking away the right to free expression. The really ridiculous notion underlying all of this is that the best response to speech we don't like -- or even speech that incites danger or violence -- is censorship. That is rarely proven true -- and (more importantly) it only opens everyone else up to risks when people in power suddenly decide that your speech is no longer appropriate either. Totally contrary to what Sanneh claims in the article, free speech "nuts" don't believe that all speech is valuable to the debate. We just recognize that the second you allow someone in power to determine which speech is and isn't valuable, you inevitably end up with oppressive and coercive results. And that is a real problem. Permalink | Comments | Email This Story

Read More...