posted 12 days ago on techdirt
A brief update on how the Internet Slowdown Day effort went, in case you missed it. Tons of sites jumped on board, including my favorite, Clickhole (The Onion's lovingly wonderful attempt to satirize clickbait sites), which showed off images of koalas that refused to load, in an effort to call for net neutrality... With the effort running on so many different websites, reports are that, at its peak, there were over a thousand calls per minute going into the Congressional switchboard, which is a huge deal. Many in Congress are indicating that their offices are getting swamped with calls. The main act is over at the FCC, not in Congress (mainly because Congress has no desire to get anywhere near reforming the Telecommunications Act, much as it should). However, a big part of the issue at the FCC is the political fight that it will set off no matter what decision it eventually comes to. The more Congress realizes that the public really supports an open and neutral internet, the more likely it is that the FCC will have the political cover to do what's right. This is good news. Again, if you're still trying to understand all this net neutrality stuff, we've got a big primer to check out.

posted 12 days ago on techdirt
Water would be a great fuel -- if only there were thermodynamically possible ways to extract energy from it. Water is a pretty stable compound: breaking its bonds takes at least as much energy as re-forming them gives back. Electrolysis can break water into hydrogen and oxygen, but burning the hydrogen doesn't produce a net gain of energy. But there may be some creative ways around this problem, and some folks have actually made progress in using water (or saltwater) in an energy-generating system. The US Naval Research Lab has developed a prototype system that extracts CO2 and H2 (carbon dioxide and hydrogen) from seawater simultaneously, then combines these gases into a liquid hydrocarbon fuel. A gas-to-liquids (GTL) synthesis process like this could help ships run longer without refueling. [url] In 1935, Charles H. Garrett claimed to have invented an engine that used only water as fuel, and he patented his invention the same year. The key to this engine was an electrolytic carburetor -- which is basically a flux capacitor -- and as soon as it hit 88 mph, it traveled into the future and its technology was lost. [url] Graphene can generate small amounts of electricity when saltwater flows over it. The trick will be producing enough electricity (and enough graphene) economically enough to make this a practical means of generating energy. [url] If you'd like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.
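To make the thermodynamic point concrete, here's a rough back-of-the-envelope sketch, not a rigorous model: electrolysis must supply at least the ~286 kJ/mol that burning the resulting hydrogen can return, so once you include real-world losses (the efficiency figures below are illustrative guesses), the cycle always comes out negative.

```python
# Back-of-the-envelope: energy in vs. energy out for water as "fuel".
# Standard enthalpy of formation of liquid water: ~285.8 kJ/mol, i.e.
# H2 + 1/2 O2 -> H2O releases ~285.8 kJ/mol, and electrolysis must
# supply at least that much before any real-world losses.

DELTA_H_WATER_KJ_PER_MOL = 285.8

def net_energy_kj(moles_water, electrolyzer_eff=0.75, engine_eff=0.35):
    """Energy gained (negative = lost) from electrolyzing water and then
    burning the hydrogen. Efficiency values are illustrative only."""
    energy_in = moles_water * DELTA_H_WATER_KJ_PER_MOL / electrolyzer_eff
    energy_out = moles_water * DELTA_H_WATER_KJ_PER_MOL * engine_eff
    return energy_out - energy_in

print(net_energy_kj(1.0))  # always negative: water is not a fuel
```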

posted 12 days ago on techdirt
Today, as you may have heard, is Internet Slowdown Day, in which a bunch of folks are calling attention to the fight at the FCC concerning net neutrality. The basic idea -- as you may have seen on this very site -- is to host some "spinning wheel" banners, highlighting the kind of internet that we may have to live with if the big broadband providers get their way and are allowed to set up tollbooths online, picking winners and losers based on who will pay the most. We've been hearing that the big broadband players are a bit nervous about this -- as often seems to happen when it comes to real grassroots efforts. They've attempted to set up some fake grassroots efforts. We've even heard rumors that they've been trying to "infiltrate" planning meetings for Internet Slowdown Day. But this one takes the cake. In response to this campaign, cable's main lobbying arm, NCTA, has launched an advertising campaign that... um... looks kinda like the Internet Slowdown Day campaign, reminding people of exactly what makes them nervous: Big Cable cutting off access. Here are two of the ads NCTA is currently running: Of course, if you look at those ads, they actually (1) look like they're a part of Internet Slowdown Day and (2) remind people of exactly what they fear most about Big Cable: the inability to connect to certain sites. No one (and I do mean no one) thinks that, if the FCC implements true open internet rules, they'll suddenly be "unable to connect" to any particular sites. The only place where that's a fear is if the FCC doesn't put in place good rules and allows companies, like the cable companies NCTA represents, to start blocking access to certain sites. So, either this is a case where some ad designer at NCTA is a subversive double agent really helping "Team Internet," or the folks at NCTA and Big Cable are really so buried in their own wonkdom that they don't realize just how much this ad appears to support the other team. Either way, thanks, Big Cable and your lobbyists, for highlighting exactly what most of us fear: an internet where we are "unable to connect" to sites because the FCC has killed off net neutrality...

posted 13 days ago on techdirt
Back in 2004, when I first read the book Innovation and Its Discontents, I was convinced that the Court of Appeals for the Federal Circuit, better known as CAFC, or the "patent appeals court," was a huge part of the problem with the patent system. It was the special court that had been set up in the early 80s to handle all patent appeals, based on the totally misplaced notion that because patent issues were so technical, regular appeals courts wouldn't be able to handle the nuances. What we got instead was a court of "patent specialists" who spent much of their time with the patent bar -- lawyers who tended to profit handsomely from an ever-expanding patent law. It didn't help that one of the original CAFC judges was Giles Rich, a former patent attorney who almost single-handedly wrote the Patent Act of 1952. Rich more or less made it his lifetime goal to expand the patent system to cover "everything under the sun made by man," and he came close to succeeding. In fact, some important research pointed out that the structure of the court means that it's really designed only to expand what is patentable, and never to contract it. Two years ago, Tim Lee had a fantastic expose on CAFC and how it had turned patents into a megabusiness by expanding them massively (often ignoring the Supreme Court to do so) and become way too chummy with the patent bar. It appears that others are catching on as well. Over at the Cato Institute, Eli Dourado has a good article discussing just how "the patent bar captured a court and shrank the intellectual commons." It's a good read, going back over much of the territory that Lee and others have covered previously, but doing it in a nice succinct fashion. It also has a nice empirical summary of just how broadly the CAFC expanded patent law: The creation of the court has significantly altered the law. Using a dataset of district and appellate patent decisions for the years 1953–2002, economists Matthew Henry and John Turner find that the Federal Circuit has been significantly more permissive with respect to affirming the validity of patents. They estimate that patentees are three times more likely to win on appeal after a district court ruling of invalidity in the post-1982 era. In addition, following the precedents set by the Federal Circuit, district courts have been 50 percent less likely to find a patent invalid in the first place, and patentees have become 25 percent more likely to appeal a decision of invalidity. With patents more likely to be upheld in the Federal Circuit era, the incentive to patent has increased. Bronwyn Hall finds a highly significant structural break in patent applications occurring between 1983 and 1984. The number of patents granted by the U.S. Patent and Trademark Office also increased, from 63,005 in 1982 to 275,966 in 2012—a quadrupling of the rate in only 30 years. This is important, in part, because one of the suggestions that's been floated to "fix" the problems of the patent system is actually to create another specialized patent court, this time at the district court level, with the claim being that this would stop things like the rush to bring patent lawsuits in east Texas, or unsophisticated juries deciding big patent cases. Except that, as we've pointed out, this would just exacerbate the problem.
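For scale, the grant numbers Dourado quotes work out to roughly a 4.4x increase over those 30 years, or about 5 percent compound annual growth -- a quick sanity check:

```python
# Quick check on the quoted USPTO grant numbers.
grants_1982, grants_2012 = 63_005, 275_966
ratio = grants_2012 / grants_1982      # ~4.38x overall
cagr = ratio ** (1 / 30) - 1           # ~5.0% compound annual growth
print(f"{ratio:.2f}x over 30 years, or {cagr:.1%} per year")
```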
Dourado's article -- like Lee's -- quotes the astounding blog post by patent attorney (and unabashed patent system cheerleader) Gene Quinn after the Supreme Court struck down medical diagnostic patents in the Prometheus v. Mayo Labs case (a precursor to later striking down or massively limiting gene and software patents), in which Quinn happily awaits CAFC "overruling" the Supreme Court: How long will it take the Federal Circuit to overrule this inexplicable nonsense? The novice reader may find that question to be ignorant, since the Supreme Court is the highest court of the United States. Those well acquainted with the industry know that the Supreme Court is not the final word on patentability, and while the claims at issue in this particular case are unfortunately lost, the Federal Circuit will work to moderate (and eventually overturn) this embarrassing display by the Supreme Court. There is, of course, some hope that maybe things are actually changing. The number of cases in which the Supreme Court has smacked down the CAFC grows each year, and in some of the more recent ones the Supreme Court's impatience with CAFC and its inability to properly interpret patent law have become clear. On top of that, CAFC is under new management, due to an ethics scandal involving the former chief judge. And the most recent few decisions have suggested that, perhaps, finally, CAFC is changing and getting the message. That said, it still makes no sense at all to have a specialized court like CAFC for patent appeals. Like nearly all other kinds of cases, patent lawsuits should go up through the circuit courts. There, those courts will be more likely to actually listen to the Supreme Court, rather than think they can "overrule" the Supreme Court -- and on difficult cases there's more likely to be a circuit split where different opinions are discussed. And, most importantly, it means that the patent bar can't so aggressively lobby just a small group of "friendly" judges that they see over and over and over again.

posted 13 days ago on techdirt
FCC boss Tom Wheeler is continuing to talk the talk concerning actually putting in place more meaningful rules to protect consumers from telco/internet gatekeepers. The big question is still whether he'll get around to walking the walk. Last week, he gave a strong speech on the lack of real competition in broadband. And this week, he gave a talk at the big CTIA (mobile operators') trade show in which he didn't do the usual suck-up to the industry that we've come to expect in the past, but suggested that the industry needs to shape up. While it starts out with some rhetoric about how he's there to "aggressively represent the best interests of his client," who he notes is "the American people," he does call out some questionable practices by the carriers, including hinting that the net neutrality / open internet rules may be the tool he uses against them: Recently, I sent letters to the four national wireless providers, asking them about their network management practices. We are very concerned about the possibility that some customers are being singled out for disparate treatment even though they have paid for the capacity that is being throttled. And we are equally concerned that customers may have been led to purchase devices relying on the promise of unlimited usage only to discover, after the device purchase, that they are subject to throttling. I am hard pressed to understand how either practice, much less the two together, could be a reasonable way to manage a network. Our Open Internet proceeding will look closely at both the question of what is “reasonable” and the related subject of how network management practices can be transparent to consumers and edge providers. One of the big loopholes of the original rules was that they didn't apply to wireless at all, which was part of the reason why companies like Verizon started focusing more on wireless instead of wired broadband. Wheeler notes that the proposed rules keep it that way, but hints about changing it, given that the landscape is changing: As evidenced by the growth in this industry over the past decade, mobile wireless broadband is a key component of that virtuous cycle.... One of the constant themes on the record is how consumers increasingly rely on mobile broadband as an important pathway to access the Internet. Microsoft, for instance, told the Commission that because we live in what they called a “mobile first” world, “There is no question that mobile broadband access services must be subject to the same legal framework as fixed broadband access services.” Thousands of consumers have echoed that sentiment. The Commission’s previous Open Internet rules distinguished between fixed and mobile, and our tentative conclusion in this new rulemaking suggested the Commission should maintain the same approach going forward. In this proceeding, however, we specifically recognized that there have been significant changes in the mobile marketplace since 2010. We sought comment about whether these changes should lead us to revise our treatment of mobile broadband services. The basic issue that is raised is whether the old assumptions upon which the 2010 rules were based match new realities. It's also nice to see that he's speaking up for competition, and suggests he doesn't believe consolidation is good for consumers: This industry has always told policy makers, “We’re different, we’re competitive.” But in the last couple of years the FCC and the Department of Justice have had to be poised to intervene to protect that dynamic. 
First it was AT&T’s proposed acquisition of T-Mobile. Most recently the Assistant Attorney General for Antitrust and I were outspoken in discouraging Sprint’s potential acquisition of T-Mobile. The American consumer has been the beneficiary: new pricing and new services that have been spurred by competition. I know that achieving scale is good economics, and that there is a natural economic incentive to accrue ever-expanding scale. We will continue to be skeptical of efforts to achieve scale through the consolidation of major players. He further notes that the mobile world shows that when there is more competition, investment follows, which counters the big telco/broadband claims that consolidation and less competition will lead to greater investment. The mobile industry has proven that competition drives capital investment. Equally important, you have shown that competition and investment are not mutually exclusive. In the past 10 years, the mobile industry has invested $260 billion to build competitive infrastructure. And getting back to the issue of net neutrality, he notes that competition alone doesn't appear to be enough to ensure an open internet where the operators aren't picking winners and losers: One of the great facilitators of competition for online services is the open design of the Internet. I remember when this industry was united around the walled garden where the only apps that reached the consumer were those which the carrier approved, usually in return for a payment. That wasn’t a good environment for innovation, or the expansion of consumer services, or the industry for that matter. The fast pace of technology which we have been discussing effectively destroyed those walls. Once the world went IP it was possible to leap the garden wall and discover the abundance of an open ecosystem. And just look at the results! But it is instructive that the walled garden existed despite multi-carrier competition. At least in the short run, this suggests that competition does not assure openness. This is great to see, as it's much more typical of the FCC boss at such events to pander to the audience. Wheeler doesn't do that at all. If anything, this speech is him giving them a pretty big warning shot (he also does this on the issue of spectrum auctions, but this post is getting long enough...). Again, it's nice (and somewhat refreshing) to see Wheeler saying these kinds of things. In fact, he's been saying a lot of the right things over the past few months. The real question is whether the actions will follow the words. Given an FCC that has failed to follow through for so many years, it pays to be skeptical until we see actual results. But, as a starting point, saying the right things is better than the opposite.
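For readers wondering what the "throttling" Wheeler mentions actually means mechanically: carriers typically cap a subscriber's throughput with a rate limiter. Here's a minimal, hypothetical token-bucket sketch of the general idea -- all parameters are invented, and this is not any carrier's actual implementation:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative only).

    Tokens accrue at `rate_bps` bytes/second up to `capacity`;
    a packet may pass only if enough tokens are available."""

    def __init__(self, rate_bps, capacity):
        self.rate = rate_bps        # refill rate, bytes per second
        self.capacity = capacity    # burst allowance, bytes
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens for the elapsed interval, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True             # forward the packet
        return False                # drop or delay: the user is throttled

# An "unlimited" plan quietly throttled to ~64 KB/s after the cap is hit:
bucket = TokenBucket(rate_bps=64_000, capacity=128_000)
print(bucket.allow(1500))  # True: a typical packet fits within the burst
```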

posted 13 days ago on techdirt
One of the most frustrating aspects of the copyright industry's insistence on pushing for harsher measures to reduce the number of illegal downloads is that we know it's simply unnecessary. As Techdirt has reported, there is mounting evidence that the best way to reduce piracy is to offer good legal alternatives. TorrentFreak has news of another data point supporting this idea: In 2012 the streaming service [Spotify] entered the Australian market and Spotify's own research now shows that music piracy via BitTorrent dropped significantly during the following year. In a keynote speech at the BIGSOUND music conference today, Spotify's Director of Economics Will Page reveals that the volume of music piracy has decreased 20% between 2012 and 2013. Similarly, the number of people sharing music via BitTorrent in Australia has gone down too. Two important caveats are needed here. First, this is research commissioned by Spotify, and it might be regarded as suspect for that reason. However, it is likely that the Australian recording industry is also monitoring this kind of online activity, and so will be able to challenge the findings if necessary. Secondly, there is no proof that the fall in music piracy on BitTorrent is down to Spotify's launch. However, the fact that a similar correlation has been observed in other countries around the world strongly suggests there is a link. Finally, it's worth noting that this new research comes at an opportune moment. As Mike has pointed out, Australia is planning to tackle online copyright infringement by implementing what amounts to a Hollywood "wishlist" of measures. Maybe the government there should start paying attention to the evidence of what works and what doesn't, rather than accepting the copyright maximalist dogma without question. Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

posted 13 days ago on techdirt
Okay, ever since our big Net Neutrality Crowdfunding, we've had some new readers who aren't as familiar with the details and issues -- yet we've been mostly writing as if everyone is informed of the basics. So, we figured it only made sense to take a step back and do a bit of an explainer about net neutrality.

What is net neutrality?

This is not an easy answer, actually, which, at times, is a part of the problem. The phrase, first coined by law professor Tim Wu, referred originally to the concept of the end-to-end principle of the internet, in that anyone online could request a webpage or information from any online service, and the internet access provider (usually called an internet service provider or ISP) in the middle would deliver that information. At the time, the ISPs were starting to make noises about how they wanted to "charge" service providers to reach end users, effectively setting up toll booths on the internet. This kicked off in earnest in October of 2005, when SBC (which became AT&T) CEO Ed Whitacre declared that internet companies were using "his pipes for free." The phrase has been warped and twisted in various directions over the years, but the simplest way to think about it is basically whether or not your ISP -- the company you pay for your internet access (usually cable, DSL or fiber, but also wireless, satellite and a few others) -- can pick winners and losers by requiring certain companies to pay the ISP more just to be available to you (or available to you in a "better" way). John Oliver probably summarized it best by arguing that it's about "preventing cable company fuckery" (though, to be clear, it goes beyond just cable companies). The internet access providers claim that service providers, like Netflix and Google, are getting a "free ride" on their network, since those services are popular with their users, and they'd like to get those (very successful) companies to pay.

Wait, so internet companies don't pay for bandwidth?

They absolutely do pay for their bandwidth. And here's the tricky part of this whole thing. Everyone already pays for their own bandwidth. You pay your access provider, and the big internet companies pay for their bandwidth as well. And what you pay for is your ability to reach all those sites on the internet. What the internet access providers are trying to do is to get everyone to pay twice. That is, you pay for your bandwidth, and then they want, say, Netflix, to pay again for the bandwidth you already paid for, so that Netflix can reach you. This is under the false belief that when you buy internet service from your internet access provider, you haven't bought with it the ability to reach sites on the internet. The big telcos and cable companies want to pretend you've only bought access to the edge of their network, and then internet sites should have to pay extra to become available to you. In fact, they've been rather explicit about this. Back in 2006, AT&T's Ed Whitacre stated it clearly: "I think the content providers should be paying for the use of the network - obviously not the piece for the customer to the network, which has already been paid for by the customer in internet access fees, but for accessing the so-called internet cloud." In short, the broadband players would like to believe that when you pay for your bandwidth, you're only paying from your access point to their router.
It's a ridiculous view of the world, somewhat akin to pretending the earth is still flat and at the center of the universe, but in this case, the broadband players pretend that they're at the center of the universe.

Why is this suddenly big news again? Haven't people been fighting about this for years?

After the last fight over this issue, the FCC issued some pretty weak "open internet" rules, in an attempt to appease everyone by effectively creating a "compromise" based, in part, on an agreement negotiated by Verizon, AT&T and Google. Rather than putting in place strong rules to protect an open internet, the FCC's rules were fairly limited and sought to block more egregious forms of discrimination while increasing transparency. However, the rules did not even apply to wireless access and left a bunch of other loopholes -- for example, as long as the broadband players could make a halfway credible claim that what they were doing was for "the security and integrity of the network," it would be allowed. Even though it had been part of the negotiations for the rules, once they were in place, Verizon sued, claiming that the FCC had gone beyond its mandate in issuing the rules. Following a long court battle, in February of this year, the appeals court ruled that, indeed, the FCC had overstepped its boundaries, and the open internet rules were not enforceable. The ruling effectively said that the part of the law that the FCC had used as the basis for its compromised open internet rules, Section 706 of the Telecommunications Act of 1996, did not allow for the rules it presented. It did, however, suggest that Section 706 gave the FCC some fairly broad powers that might be used instead. In the following months, the FCC's Chair, Tom Wheeler, tried to craft a new set of rules, basically looking to rewrite the existing (already weak) rules with the guidance the court gave. The big problem is that, based on the February ruling and Section 706, Wheeler basically had to replace a block on "unreasonable discrimination" with an argument saying that any priority efforts had to be "commercially reasonable." A switch from "unreasonable discrimination" being forbidden to "commercially unreasonable discrimination" being forbidden doesn't sound like that big of a difference. But first, remember that the original rules weren't very strong in the first place. Second, the term "commercially reasonable" means something fairly specific, and it makes it much more difficult for the FCC to prevent internet access providers (big cable and telcos) from picking winners and losers. In short, under these new rules, the cable and telco companies can put in place restrictions on internet companies, and only after that happens can those companies go to the FCC and challenge them as being "commercially unreasonable." This is a long, difficult and expensive process. And, rest assured, the cable and telco companies have some of the best and most experienced lawyers around when it comes to appearing before the FCC (or, later, facing off with the FCC in court). A small startup would have to basically go broke arguing before the FCC that certain rules are commercially unreasonable, and there's a decent chance it would still lose to much more powerful lawyers with much more experience. Even if a startup could win such a fight, it would be a huge waste of time and money.
What's all this stuff about "Title II"?

What many net neutrality advocates are asking the FCC to do is to "reclassify" broadband under Title II of the Communications Act of 1934, effectively classifying broadband providers as "common carriers," which would allow the FCC to (1) have more power over them and (2) have more of a mandate to set rules and regulations that would stop those services from picking winners and losers among internet-based services. In short, Title II would give the FCC more power to "prevent cable company fuckery."

Why are some people so opposed to Title II?

There are a bunch of reasons -- some of which are more reasonable than others. They range from the simple idea that it's crazy to try to regulate modern communications systems under a law from 1934, to concerns about too much regulation "chilling investment" in broadband, to fears about lawsuits that will come about concerning the whole reclassification process.

Wait, why aren't broadband providers already considered "common carriers"? It seems obvious that they should be.

Well, not everything is obvious. A decade ago, there were questions about whether or not cable broadband providers were technically "telecommunications" services (classified as common carriers under Title II) or if they were providing an "information service" (not under Title II and not a common carrier). The FCC (as always, under tremendous lobbying pressure from the cable companies) claimed that cable modem service should be exempt from Title II regulations. This was challenged, but the Supreme Court sided with the cable companies (and the FCC) in saying that this ruling made sense. Soon after that, as people questioned whether or not such a rule also applied to DSL lines, the FCC also reclassified DSL outside of Title II.

If the Supreme Court already said that, can the FCC switch back now?

Yes, though it is a somewhat complicated process (though not nearly as complicated as the telcos and cable companies would have you believe). It will also almost certainly be fought in court, and it will be a few years before there's a final ruling. It is certainly doable, however.

Do we really want a law from 1934 ruling over modern internet access systems?

In an ideal world, probably not. But this isn't an ideal world. If we lived in a better world, Congress would update the Telecommunications Act to take into account what's actually happening online. But we all know about how well Congress works (i.e., it doesn't). And when it comes to political hot potatoes like telco policy, where there are tremendous lobbying dollars at stake, not only would it be nearly impossible to get anything through Congress, there's a better than decent chance that anything that did get through would be... messy and potentially even worse.

I've heard that reclassifying will lead to internet companies like Google also being required to live under Title II rules, including antiquated things like tariffs and rate setting.

This is a little myth that the telcos and cable companies have been spreading. Yes, there's a ton of unrelated crap under Title II (again, why it's not ideal, but the best of a terrible list of options). But there's a (mandatory) process under the law by which the FCC must "forbear" from applying regulations that the FCC determines are not necessary for protecting consumers and thus would not be in the public interest.
The forbearance process has been used numerous times, and most of the people advocating for reclassification under Title II are also recommending forbearance from those obsolete and unrelated parts of Title II, beyond the narrow issue of stopping the internet access providers from picking winners and losers on the network.

Should we be afraid that reclassification will create a massive legal battle that leaves everyone uncertain for years?

No. First of all, no matter which way the FCC goes, there's likely to be a big legal battle that will go on for years. While Comcast and AT&T have more or less said that they would accept the rules under 706, Verizon has made it pretty clear that it would challenge them, just as it challenged the original open internet rules. Second, we already went through a big legal battle over the original rules during the last four years, and there was little indication that that legal battle had any impact one way or the other on broadband deployment or any other innovation.

Won't Title II reclassification cause the big broadband providers to give up all hope, stop investing in broadband and destroy all that is good and holy about broadband in the US?

Uh, no, though that's the story that the companies will tell you. They'll also leave out the fact that they actually really, really like to be classified under Title II when it comes to getting tax breaks, subsidies and rights of way for installing their lines in the first place. Also, the largest period of investment in broadband infrastructure happened before the big Brand X Supreme Court decision, when broadband was still considered to be under Title II. Other areas of telecommunications, including mobile phone service, are still classified under Title II, and there's a ton of investment going on in that space. The claims that Title II will chill investment have little basis in reality.

Even so, shouldn't we be at least a little uncomfortable about "regulating the internet"?

Yes, we should always be somewhat concerned about internet regulations, but this part of the internet is already heavily regulated. Remember how Verizon begged to be classified under Title II to install its lines? Installing cable, fiber and other broadband infrastructure already involves tremendous regulatory systems, in which governments grant all sorts of subsidies, rebates, tax breaks and spectrum allocations to these companies -- basically having the public pay. And all of this is heavily regulated. The real question here is under which regulations this will happen. It's not about suddenly "taking over" the internet or "regulating the internet," it's about which laws will be used for a process that is already highly regulated.

What about all this stuff with Netflix being slowed down by Comcast, Verizon and AT&T and paying to be sped up?

That's a related issue, but slightly different. That concerns "interconnection." Historically, net neutrality was just about "the last mile" -- the connection point between you as an end user and your internet access provider's router. However, there are many other issues happening beyond that, including interconnections between giant companies moving lots of traffic back and forth across the internet. Sometimes this happens via transit agreements and sometimes via peering arrangements (which are usually free).
In the last year or so, the biggest broadband players -- Comcast, Verizon and AT&T -- appeared to be letting their connections to Netflix clog up at their border routers, slowing down the delivery to end users. Effectively, these big broadband providers had figured out a different way to accomplish the same result: getting big internet companies to pay extra to reach you efficiently. By letting their ports clog, they've really just moved the problem upstream to another point they control, and gotten Netflix (for now, but soon others) to pay up, even though there's plenty of bandwidth on all sides (there's a short illustrative sketch of this bottleneck effect at the end of this post). All the broadband players need to do is connect a few cables to turn on a few more ports, a trivial and inexpensive process. Historically, most people following this space never expected interconnection to be a problem, because what kind of sick broadband company would purposely let its own ports clog up and deliver such a crappy experience to consumers? The answer, apparently, is Comcast, Verizon and AT&T, once they realized that they're basically the only game in town and that they could squeeze a lot of money out of internet companies. So, in the end, while interconnection wasn't originally considered a "net neutrality" issue, it is one. It's the same basic concept concerning "broadband company fuckery" in picking winners and losers and harming your internet connection. Unfortunately, however, FCC boss Tom Wheeler has said he doesn't yet consider it a net neutrality issue (even if he did instruct the FCC to begin investigating these agreements). Thus, even if the FCC reclassifies broadband under Title II, the interconnection loophole may still be a powerful tool for broadband fuckery.

But I've heard that Comcast supports net neutrality? It's been running all these ads saying that.

The company is lying. Or, at the very least, it's being incredibly misleading. What it supports is Chairman Wheeler's proposal to use Section 706, which, as explained earlier, is the path by which net neutrality dies. Furthermore, Comcast is effectively "required" to abide by the old net neutrality rules as a condition of its merger with NBC Universal a few years ago -- and it was the one that proposed the condition, knowing full well that it didn't really limit the company and its plans for setting up toll booths.

But is the internet really neutral already? Don't some companies already have faster access than others?

This is another misleading argument made by the broadband companies and their supporters. Yes, big companies will often have faster connections or make more use of content delivery networks that cache content and make it available closer to the end points so that it's faster to access. But that's about improving access for everyone online, not about a particular broadband company charging companies to better reach its users. Again, it goes back to the question of whether or not the broadband providers are picking winners and losers.

If you don't like what your broadband provider is doing, why don't you just switch to a competitor?

That would require real competition, which there is very little of in the US. While broadband providers like to point to things like mobile data offerings or Google Fiber as proof of competition, the truth is that there is very little real competition in the US for broadband services, when broadband is properly defined. Most places have one cable option and one DSL/fiber option, mostly from the large players mentioned above.
Basically, as you get into true broadband speed ranges, competition almost entirely disappears. And, even where there is competition, it may be getting even weaker, as Verizon is basically pushing its own users to cable and has effectively stopped expanding its fiber offering. Verizon has made it clear that it wants to focus on wireless.

So what about wireless? Isn't that competition?

Not really. Most mobile data offerings are incredibly limited, slow and much more expensive than DSL/fiber/cable. They tend to have ridiculously low caps (usually on the order of 5GB) and restrictions on things like streaming. Many have terms that effectively bar you from using them as a home broadband replacement. Is it possible that these wireless offerings will eventually be true competition? Maybe, but it's still a long way out. Besides, as they currently stand, the open internet rules don't even apply to wireless data anyway, and the largest players in the space are... Verizon and AT&T already. So, wait, how is wireless a real competitor?

Google Fiber! Doesn't that prove there's competition?

Google Fiber is a really interesting experiment, but it's only in a very few locations and expanding pretty slowly. There's little indication that there are any plans to make it a nationwide or even widespread offering. Besides, Google has also backed away from its early promise to allow competing networks to use its infrastructure.

Do we need more competition in broadband?

Hell yes. For basically a decade we've been saying that the risk of losing net neutrality is more of a symptom of a lack of competition. And, in fact, we've seen that when things like Google Fiber do show up, offering viable competition, the incumbents suddenly start ramping up their own offerings. Funny how that happens.

Okay, then how do we get more competition?

There are a bunch of possible options, though none are particularly easy or definite at this point. One idea is to encourage open access networks instead of just facilities-based competition. Under such a system, the broadband infrastructure players would wholesale their internet services to third-party service providers, who could then offer service directly. The internet world used to work this way, prior to the original broadband reclassifications. There's little indication that the FCC is even considering pushing the big broadband providers to go back to wholesaling their connections, but it's an idea that has some amount of merit. Australia started down this path years ago, but that's been tied up in politics. There have been some other ideas designed around encouraging similar infrastructure competition, such as the "homes with tails" idea, where individuals would own the connection from their home to a network where services could compete. The basic thinking here is that the core infrastructure is costly to install and inefficient to build multiple times in multiple ways (which is part of the reason why we have so little competition). Thus, rather than focusing on competition at the infrastructure level, you can put the competition at the service level and have multiple providers on the same network. It's effectively a "natural monopoly" argument, akin to highway infrastructure. You don't want "competing" highways, because that's wasteful and inefficient. So you build one (massive, super fast) infrastructure, and then wholesale it out to lots of competitors. For now, this idea seems to have almost no support at the policy level, however.
Much more focus these days is on municipal networks and their ability to offer local competition. I'll expand this a bit to suggest that some local private networks (including Google Fiber) are in the same camp. Allowing more local area competitors has long been shown to improve all connections, as the incumbents freak out and realize they really have to compete. Many muni-broadband providers get a bad rap because they're derided as "government-run" or "local utilities." And, indeed, some attempts at municipal broadband have failed badly (often due to bureaucratic incompetence). That said, there are a growing number of successful muni-broadband implementations that offer real competition -- and often better services at a lower price.

Great! So let's get muni-broadband competitors everywhere!

Not so fast, sparky. The big broadband providers (them again!?) have been able to pass laws in about 20 states that either ban outright or severely limit the ability of local municipalities to offer such broadband to residents. The big broadband providers have done little to hide the fact that these bills were written by the broadband companies themselves and designed solely to limit this kind of competition. While Tom Wheeler did make a statement earlier this year claiming that he would use the FCC's power to preempt such laws if they were blocking competition, this caused the big broadband players and their friends to freak out. Congress is now trying to stop the FCC from being able to move forward on such plans. The claims by supporters of such bans are ridiculous. They usually argue that states have the right to "make their own choice" about these kinds of laws without federal interference -- while leaving out the fact that the states tend to be blocking cities from making their own choices to create municipal broadband competitors. The simple fact is that this is a messy front in the broadband players' war against competitors. In an ideal world, cities and states would actually be making it easier to enable competition (whether private or muni-) and then getting out of the way. Once again, we don't live in an ideal world.

What about other forms of competition?

For years the FCC has been holding out for some miraculous new broadband method, and it has failed to show up. Under Michael Powell, the FCC insisted that "broadband over powerlines" would present a "third pipe" into the home to compete with phone and cable lines. This was despite multiple reports noting that broadband over powerlines was not a particularly good way to do broadband (especially with the way the US sets up its electrical grid). We already discussed wireless competition above. There is the potential that if much more spectrum were made available, new competition might spring up, but the FCC (them again?) hasn't been able to make that much spectrum available (a whole different issue for a whole different day). There's also satellite broadband, which has gotten much better in the past few years but is still limited by reliability problems and crappy latency. For years, we've made fun of satellite broadband providers for never living up to their promises. There may be some exciting developments there in the future, however, especially as satellites and space launches get much cheaper -- but that's still a ways off.

And what's this I've been hearing about data caps?
Another somewhat related issue (which the FCC insists is not a net neutrality issue, but which certainly does fall under the "broadband company fuckery" label) is that broadband companies are increasingly interested in putting data caps on your broadband usage, trying to get end users to pay more. This has taken a variety of forms -- some more draconian than others -- but the broadband providers have made it clear they'd like to use caps as a way to get more money out of users. Yes, they always pretend it's about getting low-bandwidth users to pay less, but there's little actual focus on that, because why would there be, other than for PR reasons?

So what's going to happen now?

Well, chances are that before the end of the year, the FCC will officially announce the new rules that it wants. If there's enough public and political support for it, the commissioners might actually vote to reclassify internet access under Title II, but so far Tom Wheeler has been afraid to go there. If Wheeler chickens out (as is more likely), they'll stick with the plan using Section 706, opening up "commercially reasonable" fuckery. Either way, there are likely to be lawsuits (with Verizon leading the charge), and nothing will be determined finally for a few years. Congress could act, but won't. The public pretty clearly wants reclassification under Title II. So do many, many internet companies who know they'd be targets (or wouldn't even be able to exist at all) under a system where the broadband access providers get to set up tollbooths. But, tragically, things in DC don't happen just because the public wants something. Reclassifying would also lead to a political fight in Congress.

Why is Congress so messed up on this?

For reasons that still don't make much sense, sometime around 2006, net neutrality went from a wonky issue that wasn't particularly partisan to a stupid partisan issue, with Republicans deciding it was "regulating the internet" and Democrats deciding it was about free speech. Neither is entirely accurate, though the Democrats' framing is much closer. As stated above, the internet is already regulated. The reality is that the Republicans arguing against net neutrality tend to be those who (you guessed it) receive the most money from the big broadband players. It's unfortunate and silly that Republicans -- who claim to be the party of business and innovation -- haven't yet realized that startups and innovators are actually helped by a neutral internet with real competition.

So what should I be doing?

Make some noise. Join the effort to send comments to the FCC. While many have argued the process is a foregone conclusion, it's not. If there really is enough support for reclassifying, it can absolutely happen. Not helping because you don't think it will make a difference is only a self-fulfilling prophecy. You can be cynical, be right, and end up with a limited internet... or you can be idealistic, be right, and have a chance at creating real change with a more competitive, open internet. Your choice.

Anything else?

That's about it for now, but feel free to submit more questions in the comments. Also, special thanks to everyone who supported our net neutrality reporting crowdfunding effort, which has helped make posts like this possible.
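As promised above, here's the short sketch illustrating the interconnection bottleneck: end-to-end throughput is set by the narrowest hop in the path, so a deliberately congested border port overrides even a fast last mile. All names and capacities below are invented for illustration:

```python
def effective_throughput_mbps(path):
    """End-to-end throughput is bounded by the narrowest link on the path."""
    return min(capacity for _, capacity in path)

# Hypothetical path from a video service to a subscriber, in Mbps:
path = [
    ("content provider uplink",      10_000),
    ("transit backbone",             40_000),
    ("ISP border port (congested)",       2),  # the deliberately starved hop
    ("ISP last mile to the home",        50),
]
print(effective_throughput_mbps(path), "Mbps")  # 2 -- the border port wins

# Turning on a few more ports restores the last mile as the limit:
path[2] = ("ISP border port (upgraded)", 1_000)
print(effective_throughput_mbps(path), "Mbps")  # 50
```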

posted 13 days ago on techdirt
Well, here's a nice surprise. The federal court in the Southern District of NY has issued another very nice ruling on copyright issues relating to fair use. As we'd mentioned last year, Fox had sued the company TVEyes, claiming copyright infringement. TVEyes is a media monitoring company that records tons of TV and radio, creates transcripts of it all and makes it searchable for internal use. Customers include the White House, Congress and a variety of news services, including the NY Times. Fox is Fox. It insists that what TVEyes is doing is infringement and that it violates the hot news doctrine (the silly, obsolete, court-made law that says news reporters have some non-copyright proprietary right to breaking news they first publish). Judge Alvin Hellerstein, however, is not buying Fox's argument. Why? Fair use. The court goes through the typical four-factor test and finds that the factors pretty squarely put TVEyes' usage into the "fair use" camp.

Purpose and character of the use: Goes to TVEyes, following a very long discussion. Even though it's a "for profit" company, the court says that its use is transformative and fits with the stated purpose of fair use. The court relies heavily on the recent book scanning decisions that found that scanning full books to make searchable indexes is transformative fair use. That's good. It also points to the recent Swatch v. Bloomberg case, in which a full transcript of an investor call was considered fair use, even if for commercial reasons. The court admits that the Meltwater case may be the one that goes against this, but notes that it's basically the only one (adding emphasis to our belief that Meltwater was decided incorrectly). The growing consensus appears to be that indexing content to make it more findable is transformative fair use, and the court agrees in this case: I find that TVEyes' search engine together with its display of result clips is transformative and "serves a new and different function from the original work and is not a substitute for it." ... In making this finding, I am guided by the Second Circuit's determination that databases that convert copyrighted works into a research tool to further learning are transformative. TVEyes' message, "'this is what they said' -- is a very different message from [Fox News'] -- 'this is what you should [know or] believe.'" ... TVEyes' evidence, that its subscribers use the service for research, criticism, and comment, is undisputed and shows fair use as explicitly identified in the preamble of the statute.

The nature of the copyrighted work: Here the judge finds neither side wins the fair use debate. Basically, he says "where the creative aspect of the work is transformed, as is the case here, the second factor has limited value."

The amount and substantiality of the portion used: Once again, neither side wins. The court says, yes, obviously, TVEyes is using all of Fox's TV content, but rather than taking the simple way out, recognizes (as other courts have) that this is really not about the total amount used, but the amount used in relation to the amount necessary for the use. And thus, in this case, using everything is necessary, given what TVEyes is trying to do. As the court notes, "TVEyes' service requires complete copying twenty-four hours a day, seven days a week."

The effect of the use upon the potential market for or value of the copyrighted work: This one goes to TVEyes again.
As the court notes, the economic harm here has to be based on whether or not TVEyes is acting as a substitute, and not based on any economic harm created by the transformative use (i.e., not from a market that Fox isn't serving). The court notes that Fox speculates that people might use TVEyes to avoid watching Fox directly, but has no evidence to support that: "Fox News' allegations assume that TVEyes' users actually use TVEyes as a substitute for Fox News' channels. Fox News' assumption is speculation, not fact. Indeed, the facts are contrary to Fox News' speculation." More specifically: First, none of the shows on which Fox News' suit is based remain available to TVEyes subscribers; TVEyes erases content every 32 days. Second, in the 32 days that these programs were available to TVEyes' subscribers, only 560 clips were played, with an average length of play of 53.4 seconds and the full range of play being 11.5 seconds to 362 seconds. Of the 560 clips played, 85.5% of the clips that were played were played for less than one minute; 76% were played for less than 30 seconds; and 51% were played for less than 10 seconds. One program was not excerpted at all. The long term TVEyes statistics are consistent with the specific statistics of the 19 programs. From 2003 to 2014, only 5.6% of all TVEyes users have ever seen any Fox News content on TVEyes. Between March 31, 2003 and December 31, 2013, in only three instances did a TVEyes subscriber access 30 minutes or more of any sequential content on FNC, and no TVEyes subscriber ever accessed any sequential content on FBN. Not one of the works in suit was ever accessed to watch clips sequentially. The record does not support Fox News' allegations. Fox News fails in its proof that TVEyes caused, or is likely to cause, any adverse effect to Fox News' revenues or income from advertisers or cable or satellite providers. It goes on in that vein for some time. And then concludes: "No reasonable juror could find that people are using TVEyes as a substitute for watching Fox News broadcasts on television. There is no history of any such use, and there is no realistic danger of any potential harm to the overall market of television watching..." Given all that, the victory goes in a landslide to TVEyes on fair use grounds -- with one exception: However, I do not decide the issue of fair use for the full extent of TVEyes' service. TVEyes provides features that allow subscribers to save, archive, download, email, and share clips of Fox News' television programs. The parties have not presented sufficient evidence showing that these features are either integral to the transformative purpose of indexing and providing clips and snippets of transcript to subscribers, or threatening to Fox News' derivative businesses. So that part of the case is likely to go forward (though it may wait while Fox appeals the other part). The court also tosses out the hot news claim, saying that it's preempted by the copyright claim, and Fox News' attempt to double dip via hot news won't be allowed. It cites a case that says if you're just reposting factual information, it's covered by the Copyright Act. The court notes that for TVEyes to run afoul of the hot news doctrine, it would need to be "free riding" in a manner designed to "scoop" Fox's efforts, rather than accurately covering what Fox is broadcasting.
"TVEyes is not a valuable service because subscribers credit it as a reliable news outlet, it is valuable because it reports what the news outlets and commentators are saying and therefore does not "scoop" or free-ride on the news service." In the end, it's a good ruling for fair use (and against hot news), though it seems likely that Fox will appeal.Permalink | Comments | Email This Story

posted 13 days ago on techdirt
To the "savvy" political insiders, political corruption is still not seen as an election issue that people care about or vote over. We've been discussing a number of attempts to change that -- such as with new anti-corruption PACs -- and two of the political races we've discussed ended yesterday. In both cases, the candidates lost, but they way outperformed their expectations, suggesting that there's a real possibility of a better reaction in the future. In NY, the Governor/Lt. Governor primary ticket of Zephyr Teachout/Tim Wu was always a tremendous long shot. Going up against a popular incumbent governor in Andrew Cuomo (who also has tremendous NY name recognition as the son of a former -- also tremendously popular -- NY governor), the media more or less ignored any possibility of Teachout succeeding. The campaign had little money and no real established political base. It ran almost entirely on the basis of "Hey, Cuomo is kind of corrupt and lies a lot." Before the election yesterday, an analysis of similar races suggested that incumbent governors in similar primaries often get over 90% of the vote and anything under 70% would be a political disaster for Cuomo, who is hoping to leverage his success in NY into an eventual presidential run. While he did eventually win, it was with about 62% of the vote. Teachout got 34% -- again, with no political machine and very little money. Wu ended up with just over 40%, and his opponent Kathy Hochul (Cuomo's choice) got under 60%. Obviously, a win would have been a bigger deal, but to come out of nowhere (in just a couple of months), with no huge campaign war chest or connections to traditional politics -- against such a well-known governor, basing most of their campaign on corruption issues -- this suggests that corruption absolutely can play as an election issue. Teachout and Wu had one paid staffer and four volunteers. Cuomo has a campaign war chest in the many millions. And he still could only barely crack 60% of the vote. That says something. Also interesting is the fact that Teachout and Wu actually won in many rural upstate counties. The campaign had been expecting a weaker showing there (Hochul is from upstate, and Teachout and Wu are based in Manhattan -- which they also won). Again, while losing the overall race, the strong showing is a good sign for future campaigns. Meanwhile, up in New Hampshire, we'd discussed the campaign of Jim Rubens for the Senate, against carpetbagging Scott Brown (who jumped states from Massachusetts after losing his Senate seat there). Early on, Rubens was basically a complete nobody. While he'd been in NH politics in the past, he hadn't actually occupied a political office since the 1990s. He was basically roadkill for the political machine of Scott Brown. However Larry Lessig's Mayday PAC noted that Rubens was the only Republican candidate running on an anti-corruption platform to limit the influence of money in politics. Mayday PAC spent heavily on campaign ads for Rubens, and he ended up getting around 24% of the vote, with Brown pulling in less than 50%. In the end, both of these campaigns obviously lost -- but they were interesting experiments with important lessons. Two upstart campaigns from totally different sides of the traditional political spectrum (Zephyr/Wu to the "left" and Rubens to the "right"), both of which made anti-corruption efforts a key plank in their campaigns. Both were considered barely worth mentioning at the beginnings of the campaigns. 
Both were up against incredibly well-known, well-funded political machines with national name recognition and ambitions. Neither campaign had any significant money. And both performed respectably despite those disadvantages. Yes, both campaigns lost -- but they revealed a clear dissatisfaction with the traditional political machine. And if two such tiny, out-of-nowhere campaigns could do that, hopefully future campaigns can do even more.

posted 13 days ago on techdirt
One of the most shocking revelations from the Snowden documents was that the NSA and GCHQ are running "man-in-the-middle" (MITM) attacks against Google -- that is, impersonating the company's machines so as to snoop on encrypted traffic to them. They are able to do that through the use of secret servers, codenamed Quantum, placed at key points on the Internet backbone, which therefore require the complicity of the telecom companies. Of course, in countries like China, arranging for Internet streams to be intercepted in this way is even easier, so perhaps the following story on greatfire.org should come as no surprise: From August 28, 2014 reports appeared on Weibo and Google Plus that users in China trying to access google.com and google.com.hk via CERNET, the country's education network, were receiving warning messages about invalid SSL certificates. The evidence, which we include later in this post, indicates that this was caused by a man-in-the-middle attack. Greatfire.org's analysis of why China is using MITM attacks against Google on the education network, rather than simply blocking access completely, is particularly interesting. The problem for the Chinese authorities is that Google has now implemented HTTPS by default: Google enforced HTTPS by default on March 12, 2014 in China and elsewhere. That means that all communication between a user and Google is encrypted by default. Only the end user and the Google server know what information is being searched and returned. The Great Firewall, through which all outgoing traffic from China passes, only knows that a user is accessing data on Google’s servers -- not what that data is. This in turn means that the authorities cannot block individual searches on Google -- all they can do is block the website altogether. This is what has happened on the public internet in China but has not happened on CERNET. The reason is that access to Google is simply too important for the research community in China. Blocking Google entirely would therefore be counterproductive for the country's future: The authorities know that if China is to make advances in research and development, if China is to innovate, then there must be access to the wealth of information that is accessible via Google. CERNET has long been considered hands off when it comes to censorship, for this very reason. The MITM approach offers the perfect solution: it allows researchers to get most of the benefit of Google's huge Internet index, but can be used to selectively block search queries or results when people try to access sites or information that Chinese authorities want to censor. As the Greatfire.org post suggests, the increasing use of encrypted connections for online services means that MITM attacks are likely to become much more common -- and not just in China. Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
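An aside for the technically inclined: the warnings CERNET users saw are exactly what standard TLS certificate validation is supposed to produce when someone sits in the middle of a connection. Here's a minimal sketch of that check (Python 3 standard library only; the hostname is just an illustrative target):

    # Minimal sketch of standard TLS certificate validation -- the same
    # check that produced the warnings CERNET users reported.
    import socket
    import ssl

    def check_certificate(hostname, port=443):
        context = ssl.create_default_context()  # verifies the CA chain and hostname
        try:
            with socket.create_connection((hostname, port), timeout=5) as sock:
                with context.wrap_socket(sock, server_hostname=hostname) as tls:
                    subject = dict(item[0] for item in tls.getpeercert()["subject"])
                    print("Certificate OK; issued to:", subject.get("commonName"))
        except ssl.SSLCertVerificationError as err:
            # An interceptor presenting a self-signed or mismatched certificate
            # fails here -- this is the warning users in China reported seeing.
            print("Certificate verification FAILED:", err.verify_message)

    check_certificate("google.com")

The point of HTTPS-by-default is that an interceptor can't pass this check without a certificate the client already trusts, which is why this kind of attack announces itself through browser warnings rather than staying invisible.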

posted 13 days ago on techdirt
A whole bunch of startups and activist groups are taking part today in Internet Slowdown Day -- showing just what kind of internet we might be facing if the FCC caves in to the pressures of the big broadband access providers, allowing them to set up tollbooths on the internet, pick winners and losers, and generally limit the ability of innovative new startups to compete on an even playing field online. It would enable the big broadband players to double-charge service providers, limit upstarts and competition, and generally make the overall internet a lot less dynamic and innovative. For these reasons and more, we're quite concerned with where the FCC is heading, and are joining in today's protests -- we hope you will too. If you missed them, you can read our comments to the FCC on this matter, and you still have a few more days (until the 15th) to file your own. Later today, we'll also be posting a blockbuster "everything you need to know about net neutrality but were afraid to ask" post, and will likely have a few more posts on the topic as well... Stay tuned...

posted 13 days ago on techdirt
We don't know a whole lot about how our own brains learn or even store/retrieve information, so it seems a bit difficult to expect anyone to build a computer that copies human cognitive functions with any degree of reliability. Still, researchers are trying to build computer models of human brains and brain functions, and if they do figure it out (and learn more about how human intelligence works along the way), we could see some real advances in artificial intelligence. Some of these efforts have been going on for decades, so hopefully, some new angles on machine learning will pan out. BabyX is an animated toddler simulation that is designed to mimic the facial expressions and actions of a small child. This virtual baby looks like a creepier version of Max Headroom with less wit... and it's not nearly realistic enough if it doesn't include tantrums and crying. [url] Cyc has been learning for about three decades now, and you can even download a version of OpenCyc that was last updated in 2012. We noted Cyc when it got some attention in 2001, but since then it's been kinda quietly learning... probably plotting an evil scheme to get rid of us pesky humans. [url] Robo Brain is yet another project aiming to train a computer about the real world from vast sources of publicly available information. We've seen other AI projects data mining Wikipedia, as well as other even less reputable internet repositories. [url] If you'd like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.

posted 13 days ago on techdirt
We've been waiting quite some time for the government to finally get around to releasing parts of the $40 million, 6,300-page CIA torture report, which will detail how the CIA committed torture, lied about it, and accomplished nothing even remotely effective through that torture. As you may recall, the Senate Intelligence Committee, which wrote the report, voted back in April to declassify the 480-page "executive summary," which was written to be declassified. That is, the really secret stuff is buried in the other 6,000 pages or so. Given that, the expectation was that the exec summary would need minimal redactions. Of course, the White House asked the CIA to handle the redactions, and considering that the report makes the CIA look bad, the CIA suddenly became quite infatuated with that black redaction ink. The report came back to the Senate Intelligence Committee with significant redactions, so much so that the Intelligence Committee declared it unacceptable and even argued that the choices in redactions made the report incomprehensible. Since then there's been back and forth fighting over it, with some reports suggesting that the (still redacted) report might finally come out in the next week or two. However, those plans are on hold, as apparently the White House and the Senate Intelligence Committee still can't agree on redactions, leading some to say the report won't be released until November at the earliest. Once again, we're left wondering why the Senate Intelligence Committee won't just go with plan B and release the damn thing themselves. All of this delaying only works to the CIA's advantage. The CIA has no incentive at all to compromise and come to agreement on the redactions, since it wants the report hidden. And, yes, the White House claims to want the report released, and it's got the final say over the CIA, but its actions to date have not suggested that the White House is particularly serious about getting this report out there.

posted 14 days ago on techdirt
If you're not a sports fan, you probably have no idea who the hell Ray Rice is. If you are a sports fan, you may know that he was recently banned from playing in the NFL. The background on this is quick. Months ago, TMZ released a video from an Atlantic City casino showing Rice dragging his then-fiancée out of an elevator. It was clear she was out cold. It was also acknowledged by Rice that they had had an altercation, though specifics weren't discussed. All the public knew was what they saw in the video and that Rice had agreed to enter into a treatment program to avoid prosecution, since his then-fiancée refused to press charges, and indeed married Rice weeks later. Once the public got wind of all this, Rice and his wife held a press conference. The Baltimore Ravens, the NFL team for whom Rice played football, for reasons unfathomable to this writer, decided to live-tweet the press conference, including retweeting statements by Rice's wife that made many people sick to their stomachs. It's important to understand the context in which the Ravens were putting these tweets out. In the wake of the video of Rice dragging his fiancée out of an elevator, and in conjunction with live-tweeting this press conference, the team, its executives, and its head coach were all rushing to the defense of Ray Rice. Even after the NFL suspended Rice a laughably lenient two games out of the season for the incident, the Ravens' website was full of glowing reports about their running back, their head coach was talking about how Rice is a "heck of a guy" and the lenient suspension was a good lesson for children, and NFL broadcast partners were asking Rice what his wife's words of encouragement were for him in a pre-season game. It's in that context that the Ravens' now-infamous tweet -- relaying Janay's statement of regret for her own role in the incident -- was put out, appearing to confirm that the woman who was knocked out cold had it coming to her. Then, earlier this week, a second video was released, showing Ray Rice knocking his then-fiancée out with a single punch from inside the elevator. And just like that, the Baltimore Ravens decided it was time to delete many of their tweets supporting Rice, including the one that referenced Janay Rice doing what way too many women do in domestic violence incidents: blame themselves. For some reason, whoever is running social media and/or PR for the Ravens apparently doesn't understand the Streisand Effect, because deleting those tweets has put those same tweets right back in the news, now that the NFL has upped Rice's suspension to indefinite. Instead of admitting any mistakes, or acknowledging any regrets, the team attempted to erase their misdeeds from the internet. Sorry, guys, the internet doesn't work like that. Enjoy all that terrible publicity you generated for yourselves! Next time maybe just don't be so quick to try to blame the victim of a violent crime.

posted 14 days ago on techdirt
Last year, the Supreme Court made an important ruling in the Myriad Genetics case, effectively saying that genes aren't patentable, even if you can separate them out from the rest of a strand of DNA. Myriad Genetics had isolated two key genes related to breast cancer, BRCA1 and BRCA2, and argued that only it could test for those genes, because of its patent. The Supreme Court soundly rejected that, noting that you cannot patent something in nature, and clearly Myriad did not "make" the genes. Unfortunately, as we'd noted just a few months earlier, a court in Australia had come to the opposite conclusion, saying that Myriad Genetics had legitimate patents on BRCA1 and BRCA2. That case was appealed, and there was some hope that after the US's ruling, higher courts in Australia might see the light. Not yet, apparently. An appeals court has agreed that genes are patentable Down Under, which means that these important genetic tests are likely to be much more expensive and limited there. You can read the full ruling here if you'd like. The case can still be appealed to the Australian High Court, so perhaps it will take the same trajectory as in the US, where it took the Supreme Court to finally point out the absolute insanity of patenting genes. Though, frankly, if Australia does keep genes patentable, it might make for an interesting natural experiment to see how much innovation and research happens in both places -- one with, and one without, patents.

posted 14 days ago on techdirt
Things have gone rather quiet on the Snowden front, with the initial torrent of leaks slowing to a trickle. But separately from the documents that he's provided to journalists, there's the story of the man himself, still holed up in Russia, in a rather precarious legal position. That probably explains why he has been reluctant to leave that temporary but apparently safe haven. Now it seems that Switzerland is thinking about offering him safe conduct if he visits to testify about surveillance there. Here's David Meyer's summary in Gigaom: Sunday reports in Le Matin Dimanche and Sonntags Zeitung both cited a document, written by the attorney general last November in order to establish the legal situation around a potential Snowden visit, as saying an extradition request [from the US] would be rejected if the Swiss authorities saw it as political. The document stated that only "higher state obligations" could override this position. According to Marcel Bosonnet, reportedly Snowden’s legal representative in Switzerland, the position means that "the legal requirements for safe conduct are met," and Snowden has shown interest in visiting Switzerland. Glenn Greenwald, the journalist and Snowden confidant, has previously recommended that he take asylum there. There are close parallels with the situation last December, when Brazil too was keen to have Snowden's help in investigating surveillance of its citizens. Snowden wrote at the time: Many Brazilian senators ... have asked for my assistance with their investigations of suspected crimes against Brazilian citizens. I have expressed my willingness to assist wherever appropriate and lawful, but unfortunately the United States government has worked very hard to limit my ability to do so -- going so far as to force down the Presidential Plane of Evo Morales to prevent me from traveling to Latin America! Until a country grants permanent political asylum, the US government will continue to interfere with my ability to speak. Even leaving aside concerns about those "higher state obligations" that might override Switzerland's safe passage, Snowden must also rightly fear that the US will try to seize him if he travels from Russia. The risks seem high, and for little direct benefit -- this is not, after all, the offer of permanent asylum that he is seeking. All in all, giving testimony via a video link seems a far safer option for him. Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

posted 14 days ago on techdirt
I had honestly hoped that yesterday's story about the Huffington Post finally retracting its series of totally bogus articles (mostly written by Shiva Ayyadurai or his colleagues and friends, but a few by its actual "journalists"), pretending to argue that V.A. Shiva Ayyadurai had "invented email," would be the end of this story. Ayyadurai has built up quite a reputation around this false claim, even though it's been debunked over and over and over again. Ayyadurai keeps coming back, often moving the goalposts and changing his definitions, but still ultimately flat-out lying in pretending to have "invented" email. To be clear, he did no such thing. Email was in wide use at the time he supposedly wrote his software. Ayyadurai, however, has cleverly used misleading (to downright false) claims to build what appears on its face to be a credible story, fooling a number of gullible reporters. The crux of his argument revolves around the copyright registration he obtained for a software program in 1982 called EMAIL. But, as we've explained over and over again, a copyright covers only a specific expression (i.e., that specific program), not the "invention" of anything. The most obvious parallel would be Microsoft, which holds a copyright on "Windows" -- the operating system -- but did not "invent" the idea of a graphical user interface involving "windows." And yet, yesterday morning, everyone began flooding me with new stories about Ayyadurai, written by clueless entertainment reporters, all because Ayyadurai apparently got married to actress Fran Drescher. The "dating Fran Drescher" story has been making the rounds for a while now, and it was so random and unrelated that we'd ignored it in previous posts, even though one part of the HuffPo series was HuffPo Live talking to Ayyadurai about Drescher, in what was an incredibly awkward exchange (note: despite pulling most of the other articles about Ayyadurai, HuffPo left this one up). In the video (which has since been taken down), Ayyadurai made a painfully awkward "introduction" to Fran, in which he repeatedly highlighted that he's just hanging out "in Malibu with Fran," and then said for emphasis "with Fran Drescher, who I'm dating." That led Fran to jump into view, and HuffPo Live "reporter" Caroline Modarressy-Tehrani to start absolutely gushing over Fran. It was weird, but since it wasn't directly related to the whole lie about "inventing email," we hadn't mentioned it. However, thanks to the "wedding," tons of mainstream press reports are now writing about the marriage and repeating the totally debunked claim about Ayyadurai "inventing" email. This has resulted in many people wondering if the whole HuffPo series was deliberately ramped up prior to the "wedding" to get the mainstream press to roll with the bogus claim. It's entirely possible, but considering that Ayyadurai has been trying to make this lie stick for years, it may just be a convenient coincidence. Either way, the mainstream press apparently is unable to do any fact checking and is repeating bogus claims as facts. Let's highlight a few: People Magazine, in a piece by "reporter" Gabrielle Olya, not only falsely claims Ayyadurai invented email, but says he "holds the patent for creating email." This is all kinds of wrong. He doesn't "hold the patent for creating email." He didn't create email, and he only got a copyright (not a patent) on a program called EMAIL, long after email had been created. The People Magazine piece links to the bogus, now retracted, HuffPo story. 
E-Online "reporter" Mike Vulpo falsely calls Ayyadurai "the inventor of email" and also links to the bogus, now retracted HuffPo story. Even more bizarrely, Vulpo links to the now debunked Washington Post articles from a few years ago (which have a huge correction apologizing for the misreporting on Ayyadurai) saying "reports say he holds the copyright to the computer program known as "email." Others say he indeed came up with the term "email" when he was in high school in the late 1970s. Pretty impressive, right?" I love the hedges "reports say" and "others say" while ignoring the fact that his claims to have "invented" email are debunked. And while this is slightly more accurate in noting that he has a copyright in a program called "email," it's not "the" computer program called EMAIL, which falsely implies it was the first one. Even more bizarrely, this same piece was reposted to "NBC Bay Area." You would think, being in the Bay Area, that they might have reached out to folks actually in the tech industry to debunk Ayyadurai's ridiculous claims. ABC News / Good Morning America "reporter" Michael Rothman falsely claims that Ayyadurai is the "inventor of email" and makes it even more stupid by saying that Ayyadurai is "widely credited with having invented email." This is not even remotely true. He is only credited with that by himself and a tiny group of friends. Rothman also doesn't appear to understand even the basics of copyright by saying that Ayyadurai is "the first person to hold a copyright for 'EMAIL.'" Again, all he did was write a program called EMAIL, long after email had been invented. It also claims that Ayyadurai "currently teaches at MIT." A search of MIT's staff directory does not actually return Ayyadurai as a current staff member. CBS News expands their reputation for skipping over any fact checking by saying Ayyadurai "holds the patent for inventing email." Again, basically everything in that statement is wrong. He doesn't have a patent for inventing email. He got a copyright (very different) on a program called EMAIL. And he didn't invent email. At least CBS News is smart enough not to put a byline on this bogus reporting, but it also quotes the Huffington Post. UPI has an article that doesn't mention Ayyadurai's false claims in the text of the article, but does falsely call him "email creator" in the headline (which may not have been written by the reporter who wrote the article). The Daily Mail is somewhat famous for its lack of reporting skills and fact checking -- and the publication lives down to its reputation in an article by Chelsea White, which again repeats the myth that Ayyadurai invented email. And while it claims there's "controversy" over the claim (there isn't: everyone except him and his friends know he didn't invent email) it repeats the bogus claim that he has a patent on email: "Dr. Ayyadurai - who owns the patent to email and is often credited as the inventor of the electronic mail system amid some controversy." It also links to the Huffington Post. US Magazine "reporter" Madeline Boardman more or less repeats verbatim what others are saying about Ayyadurai being "the inventor" of email and that he is "widely credited" as such. Headline and Global News "reporter" Dina Exil repeatedly calls Ayyadurai the inventor of email and also claims he "is known for being the first person to invent email," except none of that is true. He's known for pretending that. 
Popcrush "reporter" Michelle McGahan calls Ayyadurai "the inventor of email" and also falsely claims he "owns the patent for email." Now, considering that this just some random celebrity gossip, it's not that surprising that these "entertainment reporters" didn't bother to do any sort of fact checking. Why would they? And it's tough to fault them for going for the easy layup on the typical "famous person weds" story. But the problem here is that Ayyadurai has been focused on using any and all press mentions as "evidence" in his bogus campaign to declare himself the inventor of email, and now he has a number of other sources to cite, even though they're all totally wrong. It is worth noting that not everyone fell for the spin. The LA Times and San Francisco Chronicle both focused mainly on Drescher and more or less ignored Ayyadurai's bogus claims (though, the LA Times does say he's at MIT, which again, does not list him as a current staff member). The only publications I can find that really called out the bogus claims were Mashable, which noted that Drescher has married someone who "likes to claim he invented email" and Gawker, which noted that if Fran Drescher had actually read its previous articles about Ayyadurai, she might not have married him. What's funny is that in writing our series about the Huffington Post's bogus stories, some of our commenters insisted that this was actually proof as to why these "new media" players weren't trustworthy compared to traditional vetted media. And yet, above we have "trusted" media like ABC and CBS repeating totally false claims, while new media players like Mashable and Gawker are debunking them. Anyway, I'd like to think this story is now over, but somehow I get the feeling that Ayyadurai will continue to press his bogus claims again and again and again.Permalink | Comments | Email This Story

posted 14 days ago on techdirt
Last week, FCC boss Tom Wheeler pointed out two important things: (1) the FCC's definition of "broadband" internet service (4 Mbps down / 1 Mbps up) was silly, because it was way too slow for things that people actually do online, like streaming HD video, and (2) if you go up to higher (more accurate) levels of broadband, competition among providers all but disappears. This was important on two accounts. The big broadband players have always pushed for keeping the "official" broadband standards as low as possible, in order to pretend that US broadband is better and more competitive than everyone knows it actually is. In the past, the FCC has been a willing accomplice in this charade. By presenting a chart showing how competition all but vanishes at higher speed tiers, and suggesting that it was time to really jack up the official broadband standards, Wheeler was clearly signaling that perhaps those bad old days, when the FCC was a partner in the big US broadband lie, are over, and that it might actually start trying to represent reality and push for rules that actually make the US a competitive broadband player. Of course, the FCC had already asked for comments concerning the possibility of raising the official broadband definition to 10 Mbps down about a month ago, arguing that, based on actual usage information, this would "fall within the mid-range needed by a three-user household with moderate broadband use, but would not accommodate demand for a three-user household with high use." Specifically, the FCC noted that this would allow a family of three "at periods to stream a movie, participate in online education, surf the web, and have a mobile device syncing to its email account." Fair enough. Except... no. Not according to the big broadband providers, which did the FCC comment-equivalent of a freak-out at this proposal. Let's start with AT&T: Although the industry remains well ahead of the curve, the centerpiece of the Commission’s Notice is a proposal to change the definition of advanced capabilities – in particular, a proposal to increase the minimum “advanced” capabilities benchmark from 4 Mbps download speeds to 10 Mbps. Given the pace at which the industry is investing in advanced capabilities, there is no present need to redefine “advanced” capabilities, and, as discussed below, the proposed redefinition is not adequately supported. The Commission should undertake a more rigorous, fact-based and statutory analysis before determining what, if any, definitional revisions are warranted at this time. Even recognizing that the definition of broadband will evolve over time, the Notice presents no record basis for a conclusion at this time that a service of less than 10 Mbps is no longer “advanced.” AT&T insists that people really aren't using that much bandwidth, and that the FCC overestimates how much bandwidth things like streaming HD video really take. In a neat bit of tautological reasoning, AT&T actually argues that because people aren't using that much bandwidth now (perhaps because AT&T doesn't let them...), it's clear that this isn't a reasonable definition of broadband: Consumer behavior strongly reinforces the conclusion that a 10 Mbps service exceeds what many Americans need today to enable basic, high-quality transmissions. AT&T data show that, in areas where its customers have access to a service that offers download speeds greater than 10 Mbps, many consumers choose to buy services with lower download speeds. 
Indeed, even in areas where only a 6 Mbps service is available, a substantial portion of consumers choose to purchase a lower-speed service. Perhaps that's because your pricing sucks, and even when people do pay more, you do crappy things like throttle Netflix. Over to Verizon, which argues that raising the broadband speed definitions would be a problem because it might confuse people -- and you know how much Verizon wants everyone to have a clear understanding of everything, right? Furthermore, the Commission should avoid adopting new requirements for defining “broadband” that would unnecessarily complicate the Commission’s analysis and hinder the proper assessment of broadband deployment Simply boosting a number to more accurately represent what is considered a high-speed internet connection would "complicate" things how, exactly? Oh, because then we couldn't compare the old bogus numbers to the new bogus numbers: for the sake of consistency and to ensure meaningful comparisons over time, the Commission should maintain a relatively stable benchmark for defining broadband, even if the Commission also sees a benefit of tracking the availability and adoption of higher-speed services Verizon also pulls AT&T's trick of claiming "well, people have slower connections, so that's proof that lower standards are fine." At the same time, the data confirm that services providing 4 Mbps/1 Mbps are still popular and meaningful to consumers. Meaningful? I wonder how the data concludes that. Next up, we've got NCTA, representing the cable companies, which is (of course) of the opinion that it would be absurd to raise the benchmark, because, really, there isn't any good HD content online anyway: The Commission suggests that higher speeds may be needed to handle “super HD” video traffic, but even if true, given the limited presence of super HD video at this time, and the many other Internet services and functionality that can be easily accommodated with a 4/1 connection, there is no basis for finding that a connection must be able to handle one particular type of video in order to meet the definition of broadband. Yes, but perhaps the reason there isn't much super HD video is because your damn connections are too slow. Content follows bandwidth. If the FCC jacks up the standards, the broadband guys will ramp up their speeds, and watch the content flow... There are some other fun submissions, including one from CTIA, representing the wireless operators (which include Verizon and AT&T, of course), arguing that looking to the future is lame, man, and that we should base our broadband stats on historical usage: The Commission should analyze mobile broadband speeds in light of existing marketplace offerings Don't aspire to the future; let's settle for today's mediocrity. At least some folks are arguing for the change, including the Communications Workers of America, which probably realizes that requiring higher speeds would likely mean more work for its members. It's interesting to note that satellite internet providers are more than happy to support the FCC's higher standards, noting that those rates are easy to meet. Compare and contrast this statement to the whining above: 
Speeds of this level allow a “moderate use household” to stream videos, make VoIP phone calls, browse webpages, and check emails, which are the core broadband applications used by typical consumers. Consumer broadband satellite services provided by Hughes go as high as 15/2 Mbps and by ViaSat go as high as 12/3 Mbps, and they offer all of the above applications as part of their respective satellite services. That said, those satellite providers do then complain about including a "latency" component in the benchmarks, because satellite internet latency has always sucked. Public Knowledge went in the other direction, arguing that even 10 Mbps is too low and that the new standard should actually be 25 Mbps. Imagine the level of freakout from the legacy broadband players if that went through... Either way, an FCC move to raise the definition of what qualifies as broadband would be a big step toward more accurately reflecting the state of the US broadband market today -- both in terms of what kinds of speeds are really available and in recognizing the lack of competition across the nation. The fact that this prospect is scaring the traditional broadband players so much says an awful lot about how they've been able to hide behind the weak benchmarks in the past.
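It's worth doing the household arithmetic the FCC is describing. Here's a back-of-the-envelope sketch in Python -- the per-activity bitrates are rough, commonly cited estimates of ours, not FCC figures:

    # Rough check of the FCC's "moderate use household" scenario.
    # The bitrates below are ballpark assumptions, not official numbers.
    activities_mbps = {
        "one HD video stream": 5.0,
        "online education video session": 1.5,
        "general web browsing": 1.0,
        "mobile device syncing email": 0.5,
    }

    total = sum(activities_mbps.values())
    for activity, rate in activities_mbps.items():
        print(f"{activity:32} {rate:4.1f} Mbps")
    print(f"{'concurrent total':32} {total:4.1f} Mbps")
    print("Fits the old  4 Mbps benchmark:", total <= 4.0)   # False
    print("Fits the new 10 Mbps benchmark:", total <= 10.0)  # True, with little headroom

An ordinary evening of simultaneous use already blows well past the old 4 Mbps line and leaves little headroom even at 10 Mbps -- which is roughly Public Knowledge's case for 25.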

posted 14 days ago on techdirt
One of the biggest problems (hard to pick just one...) with Europe's "right to be forgotten" is that it completely fails to consider that those being asked to forget won't take this sort of government intrusion kindly, especially when requests are more about burying embarrassments than restoring privacy. It's been one backfire after another since the debacle began. Every notable request seems to be accompanied by a story about the removal, resulting in the generation of even more web content to be targeted for selective amnesia. Everyone wants to know who's asking to be forgotten and why. An FOI requester in the UK has asked the Dept. of Energy and Climate Change to provide this information on its own employees. Thank you for your email of 15 August 2014 where you requested the following information: - “How many times have Ministers in your department, or staff from your department on those Ministers’ behalf, applied to Google or other search engines to have links removed from searches under the ‘right to be forgotten’ following the EU ruling earlier this year? - Please break this figure down by minister. - Please detail the web pages run by your department which have been served a notice by Google that they will not be appearing in certain search results as a result of the right to be forgotten. - How many person-hours have been spent by your staff dealing with the ‘right to be forgotten’ (e.g. writing to or liaising with Google about removal requests)“. The request pits government transparency against government opacity -- although the latter has been generously extended to European citizens with the "right to be forgotten" law. Unsurprisingly, transparency loses, but only inasmuch as the new law ensures transparency will lose. To the extent that the “Right to be Forgotten” ruling provides that Individuals (in any capacity “official” or otherwise) have the right - under certain conditions - to ask search engines to remove links with personal information about them where the information is inaccurate, inadequate, irrelevant or excessive for the purposes of the data processing, the department holds no “official” information pursuant to your request. Certainly not the desired response, but the desired response would undermine the law, which the UK government really can't do. If UK citizens can ask privately to be forgotten, so can public officials. The wording in the response spins the law positively, which is also expected. The only part of the request the government could have answered without undermining the law it has to enforce is the tally of hours spent dealing with the paperwork. Presumably, similar requests have been made to other UK government agencies, and presumably they will be greeted with similar responses. While this doesn't shed much light on how many government officials and employees are seeking to cleanse the net of information, we can be assured that any notable request will be duly noted by the entities being asked to forget. The UK government can't answer these questions without negating this (very questionable) right, but the private sector can still let the world know who wants what gone. One of the best checks against government power is the public itself -- even when it's citizens themselves who are using a bad law to request dubious removals.

posted 14 days ago on techdirt
We've frequently talked about law enforcement and the intelligence community accessing and making use of cell site location data, which looks to figure out where people are based on what cell towers they're connected to. Law enforcement likes to claim that it doesn't need a warrant for such data, while the NSA has tested a pilot program recording all such data, and says it has the legal authority to collect it, even if it's not currently doing so. However, as anyone with even a basic geometry education recognizes, which cell tower you're connected to does not give you a particularly exact location. It can be useful in putting someone in a specific (wide) area -- or, much more usefully, in detailing where someone is traveling over long distances as they repeatedly switch towers in a particular direction. But a single reading simply cannot pinpoint a phone. I had naturally assumed that most people understood this -- including law enforcement, lawyers, prosecutors and judges -- but it turns out they do not. A rather depressing story in The Economist notes that, thanks to this kind of ignorance (combined with bogus cop shows on TV that pretend cell site data is good for pinpointing locations), cell site location data is frequently used to convict innocent people. The story opens with a ridiculous example, in which a woman was pressured into a plea bargain based on totally false claims about tower location data: SOMEONE strangled a prostitute in Portland, Oregon in 2002. The police arrested Lisa Roberts, the victim’s ex-lover, who spent more than two years in custody awaiting trial. Shortly before the trial the prosecutor told Ms Roberts, via her lawyer, that tower data collected by Verizon, her mobile-telephone network, showed precisely where she was at the time of the murder. As her lawyer recalled, the prosecutor said Ms Roberts could be “pinpointed” in a park shortly before the victim’s naked and sexually assaulted corpse was found there. She was told she faced 25 years to life in prison. She accepted a deal to plead guilty and serve 15 years. But the high-tech evidence against her was bunk. Routinely collected tower data can place a mobile phone in a broad area, but it cannot “pinpoint” it. That would require a special three-tower “triangulation”, which cannot reveal past locations. It took a decade for Ms Roberts’s guilty plea to be thrown out. On May 28th she left prison, her criminal record clean, after nearly 12 years in custody. Obviously, things like GPS do allow for much more precise location tracking (which may be why the NSA is focusing on that instead of cell site location data), but too many people confuse cell site location data with GPS. What's ridiculous is that this mistake isn't just being made by random people -- but by prosecutors and lawyers responsible for criminal cases that can destroy an innocent person's life. This points to a larger issue: people have a tendency to believe that technology can answer all questions. The NSA's fetishism of surveillance via technology is an example of this. There's data there, so it becomes all too tempting to assume that the data must answer any possible question (thus the desire to collect so much of it). But the data, and the interpretations it can lead to, are often misleading or simply wrong. And that's especially true when dealing with newer technologies or forms of data collection. 
That the criminal justice system could go this long without everyone recognizing the basic geometric limits of single-tower cell site location data is... both astounding and depressing. But it's also a reminder that we shouldn't assume that just because some evidence comes from some new-fangled data source, it's automatically legitimate and accurate.
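The geometry is easy to make concrete. In the sketch below, the 120-degree sweep is a typical figure for a directional sector antenna, and the tower ranges are illustrative assumptions (real coverage varies widely with terrain and tower density):

    import math

    # Area of the wedge served by one directional antenna on a cell tower.
    def sector_area_km2(radius_km, sweep_degrees=120):
        return math.pi * radius_km ** 2 * (sweep_degrees / 360.0)

    for setting, radius_km in [("dense urban", 1.5), ("suburban", 5.0), ("rural", 15.0)]:
        print(f"{setting:12} tower, ~{radius_km:4.1f} km range: "
              f"phone could be anywhere in ~{sector_area_km2(radius_km):6.1f} sq km")

Even the best case puts the phone somewhere within a couple of square kilometers, while GPS is typically accurate to within a few meters. "Pinpointed" it is not.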

posted 14 days ago on techdirt
The skies are now that much safer [uses finger and thumb to approximate appropriately small amount] thanks to the super-serious safety efforts of the TSA. (via Amy Alkon) Ever vigilant, intellectually adept, and multi-talented (seeing as how they can spot stuff to steal even as they have their hands down your pants), they discovered a Big Scary Terroristy Thing at Mitchell Airport in Milwaukee. It is an “F Bomb Paperweight,” a piece of art handmade by Fred Conlon and selling for $45. Quoting from the F Bomb’s blurb: It’s never easy dropping truth bombs in the office. But “f” bombs? Always explosive fun! Fred Conlon’s recycled steel sculpture lightens up desk-side chats and tough conversations with a delightfully abstract expletive appropriate for any situation. Handmade in Utah. Each is one-of-a-kind and will vary slightly. How do we know the TSA managed to confiscate such a dangerous item? Because the TSA itself posted a photo of it on its blog. A black novelty bomb was detected in a carry-on bag at Milwaukee (MKE). Accompanying the photo of the clearly-not-a-real-bomb is the following statement: We continue to find inert grenades and other weaponry on a weekly basis. Please keep in mind that if an item looks like a real bomb, grenade, mine, etc., it is prohibited. When these items are found at a checkpoint or in checked baggage, they can cause significant delays because the explosives detection professionals must resolve the alarm to determine the level of threat. Even if they are novelty items, you cannot bring them on a plane. "Looks like a real bomb." Yeah, about that… This looks about as real as any bomb ordered by Wile E. Coyote from ACME Products. The "fuse" appears to be recycled power lines, something no one could actually light. The TSA's internet mouthpiece, Blogger Bob, has previously complained that bombs are hard to detect because they don't look like their animated counterparts. “It’s not like they’re using a cartoonish bundle of dynamite with an alarm clock strapped to it,” Bob Burns of the TSA Blog Team posted on the agency’s Web site. He must be so relieved that someone actually walked into the Milwaukee airport with something cartoonish enough to be recognized as a bomb immediately by TSA staff -- an agency that now looks more cartoonish than the "bomb" it confiscated. (Real bombs tend to go undetected...) Presumably, the dangerous item will be forwarded to the TSA confiscation dumping grounds, where it can be sold to the highest bidder and put back into circulation. Too dangerous to put on a plane, but not too dangerous to put back in the public's hands, where it might be carried onto a bus, subway car or aerial tram. The TSA doesn't mind if you hijack/blow up another form of mass transportation… just don't take down an airplane.

posted 14 days ago on techdirt
Europol is probably not very well-known outside the EU. Here's how it describes itself: Europol is the European Union's law enforcement agency whose main goal is to help achieve a safer Europe for the benefit of all EU citizens. We do this by assisting the European Union's Member States in their fight against serious international crime and terrorism. The emphasis is in the original. You may notice that it mentions Europe a few times, which underlines the fact that Europol is a European organization based in Europe, run by Europeans and serving Europeans. But the US seems to take a different view: The head of the EU police agency Europol is taking instructions from the Americans on what EU-drafted documents he can and cannot release to EU lawmakers. The story in the EUobserver quoted above explains: The issue came up over the summer when US ambassador to the EU Anthony Gardner told EU ombudsman Emily O'Reilly she cannot inspect an annual Europol report drafted by the agency's own internal data protection review board. And if you are thinking there might be some top-secret US information in that report, the Dutch MEP Sophie In't Veld says that isn't the case: "There is no operational information, there is no intelligence, there is nothing in the document. So you really wonder why it is kept a secret." The problem seems to be simply that the uppity Europeans dared to write their report without asking for US permission first: The Americans are unhappy because Europol had drafted the report "without prior written authorisation from the information owner (in this case the Treasury Department)." The fact that the Treasury Department thinks that it "owns" information about how the Terrorist Finance Tracking Program (TFTP) complies with European data protection laws is rather telling. No wonder that back in March, the European Parliament called for the TFTP to be suspended in the wake of revelations that the US was going outside the program and accessing EU citizens' bank data illegally. The latest high-handed action by the US ambassador to the EU is unlikely to encourage them to change their minds. Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

posted 14 days ago on techdirt
Under the European Copyright Directive, Member States may bring in an exception to copyright that allows works to be used without consent for the purposes of caricature, parody or pastiche. Following a long-drawn-out process, the UK will be doing exactly that, with effect from October 1. But a new judgment from Europe's highest court, the Court of Justice of the European Union, has added a new limitation to the parody exception (pdf). Here's the background to the case, as explained by the court's press release: At a reception held by the [Belgian] city of Ghent to celebrate the New Year, Mr Deckmyn, a member of the Vlaams Belang (a Flemish political party), handed out calendars for the year 2011. The cover page of those calendars featured a drawing which resembled that appearing on the cover of one of the Suske en Wiske -- known in English as Spike and Suzy -- comic books with the original title 'De Wilde Weldoener' (which may be rendered as 'The compulsive benefactor'), produced in 1961 by Willy Vandersteen. The original drawing represented an allegorical character in the series wearing a white tunic and surrounded by people trying to pick up the coins he was scattering all around. In the drawing appearing on Mr Deckmyn's calendars, that character was replaced by the mayor of the city of Ghent, while the people picking up the coins were replaced by people wearing veils and people of colour. Several of Vandersteen's heirs and other holders of the rights to the comic book series brought an action against Deckmyn and the organization that financed the Vlaams Belang, claiming copyright infringement. The latter two said that the calendar was satire, and therefore was covered by the EU's parody exception. The copyright holders asserted that parody must display originality, and that in any case the drawing conveyed a discriminatory message. Faced with all these claims, the Court of Appeal in Brussels asked the EU Court of Justice to clarify the conditions that a work must fulfill in order to be classified as parody. Here's the good news from the EU court's decision: A parody need not display an original character of its own, other than that of displaying noticeable differences with respect to the original work parodied. But there's less-good news in the form of this additional comment: The Court notes that the application of the exception for parody, established by the directive, must strike a fair balance between, on the one hand, the interests and rights of authors and other rightsholders and, on the other, the freedom of expression of the person who wishes to rely on that exception. In that context, the Court declares that, if a parody conveys a discriminatory message (for example, by replacing the original characters with people wearing veils and people of colour), the holders of the rights to the work parodied have, in principle, a legitimate interest in ensuring that their work is not associated with such a message. As is usual, the EU Court of Justice has passed the case back to the original Belgian court to apply its judgment. The latter will have to decide whether the parody in this case does indeed convey a discriminatory message, and whether the copyright holders can therefore require that the work is not "associated with such a message" -- which presumably means that they can insist that it not be distributed. 
What's problematic here is that, by its very nature, parody pushes the boundaries of good taste; it's quite likely to use images that upset some people, and that are maybe borderline discriminatory in some way (whatever that means). The risk is that this rather vague ruling from the European court will encourage more legal action against works of parody, causing social and political commentary to suffer as a result. Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

posted 14 days ago on techdirt
The struggle to force the government to behave in a transparent fashion often runs through the FOIA process. When the government responds, it often takes out meaningful information by abusing FOIA exemptions. When the government doesn't respond, the "free" request becomes a rather expensive trip through the nation's courts. And even when the government does respond, it may decide not to waive fees, leaving the requester to come up with anything from several hundred to several thousand dollars in order to see documents created with taxpayer funds by federal employees. Entities like MuckRock deal with this obstacle through crowdfunding. But not every requester has access to this sort of support. If the documents are delivered without full payment (some just require a first installment of a certain percentage), the government can come after you for the uncollected fees. But the government's collection efforts go beyond a series of increasingly angry letters. According to information compiled by the indispensable blog Unredacted, the government has the option to start docking your paycheck. In a letter to the FOIA Advisory Committee, Michael Ravnitzky points to an article at the Washington-focused blog The Hill indicating that some government agencies are willing to use this method to collect unpaid FOIA fees. [pdf link] I would like to bring the following issue to the Committee’s attention: application of Administrative Wage Garnishment to fees assessed for Freedom of Information Act requests. Federal agencies have begun exploring and instituting a new weapon to use against FOIA requesters: wage garnishment. Here is a link to an article that mentions two agencies: one that is implementing wage garnishment and one that has decided not to do so after receiving some unfavorable feedback. http://tinyurl.com/FeeGarnishment In this case, two agencies have already sought permission to use wage garnishment in FOIA cases for unpaid fees. A number of other agencies have established rules implementing the Administrative Wage Garnishment - AWG - provisions of the Debt Collection Improvement Act of 1996 - DCIA, but do not mention FOIA specifically. Other agencies are in the process of such rules, or are planning to add such rules. As he cautions, the use of this collection method will only further encourage onerous and abusive fees. Agencies often impose disproportionate fees that have the effect of deterring certain types of requests. For example, requesters frequently receive large fee letters without benefit of a preliminary call or note from the agency to discuss the possibility of a narrowed or more specified request, or to help clarify fee status. Agency staff often charge review fees to noncommercial requesters, despite the fact that such fees are inapplicable. Agency staff frequently seek to charge search fees to newsmedia requesters, again despite the fact that such fees are inapplicable. Noncommercial requesters are subject to search and review fees when responses are not provided within the statutory deadlines, even though the law precludes such fees, agencies asserting that all or nearly all the records requests they receive are subject to unusual and exceptional circumstances. Agencies even have imposed large page by page duplication fees, even when supplying electronic copies of records that already exist in electronic form. As Ravnitzky notes, this form of collection is particularly intrusive and can have adverse effects on requesters. 
For the citizen on the receiving end, this can adversely affect current and future employment, as well as possibly prevent them from obtaining housing or vehicles. For those already employed, it informs employers of little more than the fact that their employee owes the government money -- which implies all sorts of unseen dishonesty. Ravnitzky calls it the "nuclear option," one which certain agencies might deploy as further discouragement for future FOIA requests. Every government agency has so many other options for resolving this issue (blocking further requests and withholding remaining responsive documents, to name a few) that this fee-extraction method shouldn't even be on the table. The most disgusting aspect of this is that certain agencies (and I imagine more will warm to the idea) feel entitled to take funds (well, additional funds) right out of citizens' paychecks to pay for documents created, stored and distributed by taxpayer-funded agencies and taxpayer-funded employees. This isn't like a federally-funded school loan, where the government has spotted a member of the public the money to finish their education. This is the government extracting fees for information it won't release until asked, and charging ridiculous amounts for it. The fact that this method is available to government agencies is its own chilling effect, running directly contrary to the spirit of the Freedom of Information Act.
