posted 4 days ago on gigaom
As China’s massive population adopts smartphones, Chinese smartphone makers stand to gain a great deal. Take Huawei, for example. It has quietly become no. 3 on the list of worldwide smartphone vendors based on total sales and has seen revenues rise 19 percent so far this year, even as no. 1 Samsung is warning its investors of declining shipments. Huawei announced its mid-year performance on Monday, saying “Revenue and profit for the first half of 2014 are in line with our expectations.” It’s easy to think Huawei is simply growing because it’s a high-profile company in a country still ripe for large mobile device sales growth. Surely, that is a big part of the equation, but it’s not the whole story. The company noted that its flagship smartphone, the Ascend P7, is available in 70 countries. And last month, Huawei launched an online store in the U.S., catering to the up-and-coming unlocked phone market here: As carriers either abolish or provide alternatives to lengthy contracts and phone subsidies, the company wants to provide handsets to customers who want unlocked, contract-free phones. To that end, Huawei now offers the Ascend Mate2, a 6-inch, $299 off-contract Android phone, to U.S. consumers, with additional models on the way. I’m currently using an Ascend Mate2 review unit — stay tuned for a full review — and find it’s generally a super value for the price. Clearly, others around the world feel the same about Huawei’s smartphones, given that the company only trails Samsung and Apple when it comes to global sales.

posted 4 days ago on gigaom
Texas EquuSearch, a nonprofit that says unmanned aircraft can do the work of 100 volunteers, will resume the use of drones in missing-person searches after a court ruled that an FAA email had “no legal consequences.” The search-and-rescue group had previously used five-pound drones made of foam and plastic to survey rough terrain, but stopped doing so earlier this year following a warning by the federal aviation regulator. Texas EquuSearch challenged the warning before the U.S. Court of Appeals for the District of Columbia, where it also claimed that the FAA has for years “harassed and interfered” with its members “before, after and during search and rescue activities.” The three-judge panel wrote in a short order that it would dismiss the case because the email did “not represent the consummation of the agency’s decision making process, nor did it give rise to any legal consequences.” The ruling (posted below) amounts to a technicality — the court basically said Texas EquuSearch’s lawsuit was premature — but it does, for now, lift the legal cloud over the nonprofit. “Texas EquuSearch is free to resume its humanitarian use of drones … Therefore, the organization and its volunteers plan to resume their use of this life-saving technology immediately,” said lawyer Brendan Schulmann by email. The ruling is also likely to draw renewed attention to the FAA’s predicament when it comes to addressing the growing popularity of unmanned aircraft, which have the potential to transform a range of industries from farming to news photography to movies. While the FAA has declared that nearly every commercial use of drones is banned, the agency does not appear to have the legal authority to impose such a measure. The issue is that, while Congress has instructed the FAA to develop policies, it is behind schedule in doing so — and is instead relying on policy statements that don’t have the same force as regulations. The next big test for the FAA will come in a case involving a commercial photographer in Virginia, where the FAA is appealing a lower court’s ruling that it had no authority to impose a $10,000 fine. The current law on civilian drones is not a free-for-all, however. The FAA does possess clear authority to regulate them near airports, while state and city authorities can also enforce privacy and public order laws against people who use drones irresponsibly — like a New York man who flew one near hospital windows last week.
[Embedded document: the court order in the Texas EquuSearch case]

posted 4 days ago on gigaom
Imagine you’re watching TV, and the ad break begins. You could just flip to the next channel — but chances are, ads are playing there as well. Now what if your TV’s program guide could tell you which of your favorite channels are having an ad break at any given time? Beamly, the social TV startup previously known as Zeebox, is working on such a guide, and it wants to bring it to TV sets everywhere, thanks to an app for Google’s new Android TV platform that Beamly is set to announce Monday. It’s the first time the company has targeted TV sets after producing second-screen apps for Android and iOS as well as the web, but Beamly CTO and co-founder Anthony Rose told me during an interview last week that the jump to the big screen had long been on the company’s roadmap. “We always imagined the experience built into the TV,” he said, adding that the company even built an early prototype based on Samsung’s smart TV platform years ago. However, the TV ecosystem just wasn’t ready, which is why Beamly initially focused on mobile apps. The second screen was simply where innovation was able to happen until TVs were ready to catch up, said Rose. And with Google’s newly introduced Android TV platform, that moment may finally be here, he argued. Rose’s team has now built a first prototype of a Beamly app on Android TV, which is basically a TV guide that incorporates personalization, social features and interactivity. The first-screen app is capable of interacting with Beamly’s existing mobile apps, and a key part of the experience will be a personalized TV channel based on past viewing habits as well as a user’s Beamly profile. The idea of this channel is to play programs in a TV-like fashion, but with content coming from different sources, including a variety of live TV channels and online video services. “That’s what people expect from TV. It just plays,” said Rose. The Beamly TV app will also be able to offer interactive TV experiences, like the option to use the second screen to vote on contestants in a competitive reality TV show, and Rose is also thinking about offering social and entertainment content when nothing is on TV, effectively turning the big screen into a social wall for the living room. “At the moment, TVs aren’t doing anything when they are not playing TV. But they could be,” he said. Beamly is looking to partner with consumer electronics manufacturers to distribute the app, and Rose said that he could help hardware makers with the integration of live TV into the Android TV framework. The result could also be a white-labeled app that incorporates a subset of features based on the specific needs of a TV maker, conceded Rose, but he said that Beamly definitely wants to keep its own brand on its second-screen app. “Beamly is and remains a Beamly-branded consumer proposition,” he said. Beamly isn’t the only company that’s coming out of the mobile TV app space and is now looking to replace the plain old grid guide on the TV screen. Boxfish, which first debuted its TV guide on mobile devices as well, has been squarely focused on helping consumer electronics manufacturers like Samsung add contextual smarts to their TV guide products. And Fan, which was previously known as Fanhattan, even built its own Android-based TV box, which it’s now selling to Time Warner Cable customers. I asked Rose about the latter, and wanted to know if he ever contemplated going down that path as well.
“We absolutely thought about building our own box – for about 15 minutes,” he joked, adding that Beamly is instead trying to get the software part right. “Both hardware and software can be hard spaces to be in,” he said. That’s true — and even more so when you’re building software for a platform like Android TV that doesn’t even have hardware on the market yet. But in the face of ongoing social TV consolidation and the ever-deeper involvement of Facebook and Twitter, the jump to the first screen nonetheless seems like a smart move for Beamly.

posted 4 days ago on gigaom
High-performance computing (HPC) allows users to solve complex science, engineering and business problems using applications that require a large amount of computational resources, as well as high throughput and predictable latency networking. Most systems providing HPC platforms are shared among many users and constitute a significant capital investment to build, tune and maintain. Amazon Web Services (AWS), using Intel® Xeon® processors, enables you to allocate compute capacity on demand without up-front planning of data center, network and server infrastructure. You have access to a broad range of cloud-based instance types to meet your demands for CPU, memory, local disk and network connectivity. Run infrastructure in any of a large number of global regions and avoid lead times for contract negotiation and a local presence.
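As a rough illustration of what allocating compute capacity on demand looks like in practice, here is a minimal Python sketch using boto3, the AWS SDK for Python. It is not part of the original post; the AMI ID, key pair name, placement group and instance type are placeholders you would substitute with values from your own account and region.

```python
# Minimal sketch (not from the original post): requesting compute capacity
# on demand with boto3. The AMI ID, key pair, placement group and instance
# type below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",          # placeholder: an HPC-ready machine image
    InstanceType="c3.8xlarge",       # example compute-optimized instance type
    MinCount=1,
    MaxCount=1,
    KeyName="my-hpc-key",            # placeholder key pair name
    # A placement group is one way to get the low-latency, high-throughput
    # networking mentioned above; the group must already exist.
    Placement={"GroupName": "my-hpc-placement-group"},
)

print("Launched", response["Instances"][0]["InstanceId"])
```

Tearing capacity down when a job finishes is equally programmatic (for example with ec2.terminate_instances), which is what makes the no-up-front-planning model workable for bursty HPC workloads.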

posted 4 days ago on gigaom
Waiting for a retina display update on Apple’s MacBook Air laptop? If the Economic Daily News out of Taiwan is correct, you won’t have to wait much longer: The business-focused site reported over the weekend that parts for a refreshed MacBook Air with a higher resolution screen are shipping to Apple’s manufacturing partners now. Apple Insider notes that parts shipments now combined with production of the laptops starting in August indicate October as a possible launch month for the revamped MacBook Air line. Aside from some changes in the chip that runs inside the laptop, the current MacBook Air computers are much the same as they’ve been since the late 2010 update. Adding a retina display would obviously change that but it may not be the only difference: KGI analyst Ming-Chi Kuo suggested in April that Apple would introduce a 12-inch MacBook Air in the third quarter of this year, fitting between the two currently available models. According to the Economic Daily News, that potential product may be delayed until later, possibly until 2015, due to a shortage of the chips needed to run it.

posted 4 days ago on gigaom
Microsoft has long relied on an army of outside contractors to keep things running – but that use will be curtailed going forward, according to an internal memo unearthed by Geekwire. New rules will apply to “contingent workers” — including contractors, consultants and other external staff, typically people who work without benefits. For example, after their 18-month stints, they will be cut off from access to Microsoft’s facilities and network for six months, according to the memo and FAQ, which Geekwire prints in full. Microsoft’s use of outside contractors has been controversial. Many sued the company years ago, claiming they were true employees in everything except name and were entitled to social security withholding and other benefits. The news of the restrictions leaked just days after Microsoft said it will cut 18,000 jobs — or 14 percent of its workforce — in the coming year. The cutback is by far the largest in the company’s 40-year history, and a key part of CEO Satya Nadella’s attempt to get the company down to fighting weight.

posted 4 days ago on gigaom
Drive about 45 minutes southeast of downtown Pittsburgh, out to the edge of Westmoreland County, and you’ll reach a sprawling, cavernous factory whose history shadows the ebb and flow of technology trends and American manufacturing. In the late 1960s, Chrysler, fresh off the American muscle car wave, started building the plant, but as the seventies approached and the price of oil rose, it suspended construction without making a single car. Fast forward a decade and German car company Volkswagen stepped in — rolling the boxy Rabbit off the line — but that era was hit with a lethal combo of worker unrest, awkward car designs and dropping oil prices that took the sheen off Volkswagen’s small-car edge.
The 2.3-million-square-foot factory in Westmoreland County, PA. Image courtesy of Katie Fehrenbacher, Gigaom.
In the nineties, it was Japanese giant Sony’s turn, and the company used the site to make rear-projection televisions, but a few years later LCD and plasma screens made that tech obsolete. As low-cost production moved to Asia, Sony’s plans for most of its U.S. factories were canned. It’s as if the two-million-square-foot factory that sits in the city of Mount Pleasant has always been home to a series of downward-trending tech cycles. But perhaps this year will see the end of that industrial version of Groundhog Day. The factory, owned by the state and now leased to a variety of tenants, is the brand-new home to a company that could be a leader in the emerging market for low-cost batteries that can be plugged into the power grid and paired with solar panels.
Aquion Energy’s new factory in Westmoreland County, PA. Image courtesy of Katie Fehrenbacher, Gigaom.
The new tenant is young battery startup Aquion Energy, which has set up shop in a small section of the huge factory. It’s churning out ultra-simple, low-cost and non-toxic batteries made from a combination of salt water, carbon and manganese oxide. Aquion is farther ahead than most of its competitors, many of which are still in the R&D and prototyping stage: it has made, and is in the process of shipping, about 2 MWh worth of batteries to customers since the beginning of 2014, and it plans to ship several more MWh this year. In terms of individual battery units, that means it will ship somewhere between 5,000 and 10,000 batteries this year. The energy storage market will potentially be worth some tens of billions of dollars in the coming years, and Aquion’s factory is the first of its kind at this scale. Solar companies are starting to install battery banks next to solar farms so that the batteries can store solar energy during the day to be used after the sun goes down. Remote communities are beginning to pair batteries and solar panels to disconnect from the grid. Down the road, utilities will more commonly buy these types of batteries to better manage the grid. On a tour of the factory last week, Jay Whitacre, Aquion Energy founder and CTO — who invented the chemistry used in these batteries when he was a professor of materials science at Carnegie Mellon University — showed me the first installed manufacturing line and walked me through the process of how an Aquion Energy battery gets made. Along the way, he could barely contain his excitement over the fact that a venture that was once just an idea in his head is now shipping product and bringing in revenue.
Aquion Energy founder and CTO Jay Whitacre inspects a battery that’s ready to ship to a customer. He invented the battery chemistry at Carnegie Mellon University. Image courtesy of Katie Fehrenbacher, Gigaom.
From idea to product
Whitacre started investigating the battery tech in 2007 in his Carnegie Mellon lab, using a rigorous “economic-first” analysis. The energy industry is “dominated by economics,” and any energy storage battery product has to make the economics work first and foremost, explains Whitacre. Lead acid batteries, which have been on the market for decades, are relatively inexpensive but degrade fairly quickly. Their energy density (the amount of energy they can store) is relatively low, they don’t operate very well in hot temperatures and — of course — they contain lead. Still, lead acid batteries are commonly used in off-grid solar systems. Lithium ion batteries are starting to be used more frequently for the power grid. They provide much more energy density than lead acid batteries, but historically they’ve been pretty expensive, and they also don’t last that long without degrading. Electric car maker Tesla says it can lower the price of lithium ion batteries significantly through its massive battery factory, but whether that’s true remains to be seen. Despite being widely available, neither lead acid nor lithium ion batteries appear to be a great fit for the power grid and solar panels. In particular, they’re not all that great at storing solar energy from a solar farm. Lithium ion batteries might be pretty good at moving a vehicle, using high power and providing short, shallow bursts of energy, but clean power applications generally need several hours’ worth of sustained, lower-power energy. The energy storage industry needed an entirely new way of looking at the problem.
Aquion Energy founder Jay Whitacre explains the architecture of the battery. Image courtesy of Katie Fehrenbacher, Gigaom.
Whitacre began testing combinations of low-cost materials and simple battery designs in the hopes of coming up with a product that would be as cheap as possible, easy to manufacture and able to be operated for a long time without degrading at any temperature. He threw out materials that didn’t work and sought to “fail fast” with his iterations. Early on he met David Wells, a partner with Valley venture firm Kleiner Perkins, who told him something like, “If you ever see great results, let me know.” About a year later, Whitacre came up with a promising combination and Wells, true to his word, led Kleiner to incubate the company in its early life. Around that time Ted Wiley, who is now Aquion’s VP of product and corporate strategy, was fresh out of business school and began working on a field study of the battery tech for Kleiner Perkins. Whitacre ultimately asked Wiley to join him as Aquion’s first employee. Wiley says joining the company early on was “total luck.” He ran its operations for the first two years and led the spin-out of the company from Carnegie Mellon.
The rise and fall of cleantech in Silicon Valley
At first, venture capital funding for Aquion was readily available. With Kleiner in early, and the cleantech Valley bubble inflating between 2008 and 2011, Whitacre says he saw a surge of attention: “At the time I didn’t really understand what drove all the interest from Silicon Valley, but I was happy to take the money.” Following Kleiner Perkins’ early rounds and a small amount of Department of Energy funding, Valley firm Foundation Capital led a $30 million round in 2011. The round, which also included Advanced Technology Ventures and TriplePoint Capital, was oversubscribed, says Whitacre.
But by 2012, Silicon Valley sentiment around cleantech had started to sour. The term and the industry had become politicized, there had been a series of high-profile Valley-backed bankruptcies like Solyndra and Fisker, and many venture capitalists ended up losing money and faith in the sector. Today, venture funding for cleantech startups is below what it was during the bubble years.
Solyndra’s groundbreaking ceremony in 2009, featuring a live video feed of Vice President Joe Biden. Image courtesy of Katie Fehrenbacher, Gigaom.
Aquion needed more funds to continue to grow its business and to move into manufacturing. The company wanted to start commercializing its tech in 2014 and 2015 in order to capitalize on a growing energy storage market and get to market faster than competitors. Complicating the difficult funding environment further, Aquion had recently switched its anode-materials blend to a higher-energy, better-performing one. That was great, but if there’s one thing all investors worry about, it’s technology uncertainty and risk. “Early 2013 was tough. There was an about-face in the investing community. They realized they needed to be more cautious and that this sector can have longer-term ventures,” says Whitacre. In 2013, Aquion Energy didn’t target new Valley investors. It instead closed funding from family offices and international investors. Tao Invest — the fund of billionaire family the Pritzkers, who also own Hyatt hotels — joined. Hong Kong–based fund Yung’s Enterprise came in, as did Russian firm Bright Capital. Aquion also raised money from high-profile billionaire and Microsoft co-founder Bill Gates, who has backed other battery startups, too. That round “was a huge deal for us. I don’t know where we would be without it,” says Whitacre. In all, Aquion has raised over $100 million to get its batteries to market.
Aquion Energy founder Jay Whitacre standing next to a battery stack. Image courtesy of Katie Fehrenbacher, Gigaom.
Aquion is now selling its first battery stack product, the S-10, for $850 per stack (2 kWh each). Seven or eight battery units make up a stack. Twelve stacks make up a module, which runs for around $11,000. At those prices out of the gate, Aquion is selling its batteries for below $500 per kWh — on par with lead acid batteries, but they last longer without degrading and are guaranteed for at least 3,000 cycles. If the batteries are charged and discharged, say, once a day, they should last for more than eight years. Those prices are just the beginning. Aquion’s goal is to drop its prices below $350 per kWh by the end of 2015 and to make the batteries progressively cheaper after that, getting the cost under $200 per kWh by 2020. At those prices, Aquion could see a whole new market open up for utility-scale power grid management. Right now, most customers are buying the batteries for off-grid solar and are willing to pay the higher prices partly because they want to be among the first to use the tech.
Battery stacks and modules in Aquion Energy’s factory. Image courtesy of Katie Fehrenbacher, Gigaom.
Making the battery: Behind the scenes
The Aquion battery’s secret sauce is its electrode blend. Traditionally, a battery is made up of a positive electrode, a negative electrode and an electrolyte that sits in the middle and shuttles ions between the two electrodes during charging. Aquion uses a dry manganese oxide powder for the positive electrode and a dry carbon powder for the negative one. Saltwater fills the battery to conduct the charging and discharging.
On some of the assembled battery units I checked out, you can actually see the dried salt crystals on the outside of the packaging.
Material powders that go into Aquion’s electrodes. Image courtesy of Katie Fehrenbacher, Gigaom.
At one end of the factory sit stuffed sacks of powdered materials, like those shown above. On a floor above the main factory, Aquion workers mix together the electrode blend. The powders are then stamped into dry pellets that look like square hockey pucks made of pencil tips. The “hockey pucks” come off the assembly line and are assembled into the battery module.
The powders get stamped into dry pellets. Image courtesy of Katie Fehrenbacher, Gigaom.
The pellets are smooth to the touch, lightweight and leave a slight dark residue on your fingers when you pick them up, the way a pencil tip does.
Machines that pick and place the electrodes into the battery units. Image courtesy of Katie Fehrenbacher, Gigaom.
Once the electrodes are made, a machine picks them up and puts them in the right place to be assembled into a battery unit. It’s the same type of machine that puts chocolates into those heart-shaped Valentine’s Day boxes. When the battery casing is filled, it looks like this (this is one that was put aside because the metal closure was tweaked):
Inside of an Aquion Energy battery, showing the cathode and anode pairs. Image courtesy of Katie Fehrenbacher, Gigaom.
Once the battery’s electrodes and separators are fully assembled, it’s filled with saltwater. Then it’s basically done: it gets closed up and can be stacked with seven or eight more batteries.
An Aquion Energy battery unit. Image courtesy of Katie Fehrenbacher, Gigaom.
The batteries are relatively heavy once they’re fully assembled. I could pick one up, but I wouldn’t want to carry it a long distance. Remember, they’re filled with saltwater.
Battery units stacked up on the metal rods. Image courtesy of Katie Fehrenbacher, Gigaom.
The modules get computing and software units that Aquion Energy is developing in-house. The market for battery software, developed by startups and big companies alike, is growing. Aquion is also working with other integrators that make software, like Princeton Power Systems. After the computing top, the battery gets an Aquion-branded cap.
Computing units for Aquion Energy battery modules. Image courtesy of Katie Fehrenbacher, Gigaom.
All of the battery stacks and units are tested before going out the door. In hot rooms, for instance, the batteries are tested operating at 40 and 50 degrees Celsius (104 and 122 degrees Fahrenheit). One of the benefits of the Aquion battery is that it can run just fine in a hot environment, like a super sunny solar field.
Battery modules being tested at Aquion’s factory. Image courtesy of Katie Fehrenbacher, Gigaom.
Currently, Aquion is running one manufacturing line and can make 200 MWh worth of batteries per year. It can produce one to two battery units a minute, and there are already batteries in the queue ready to be shipped.
Batteries ready to ship at Aquion Energy’s factory. Image courtesy of Katie Fehrenbacher, Gigaom.
Down the road, Aquion plans to expand production to five battery lines, which will be able to make over a gigawatt-hour of batteries per year. Though Aquion is charging ahead with commercial manufacturing, it won’t be a large-scale factory for a while. And of course, despite all of the good intentions and hard work already done, a lot can go wrong when it comes to scaling up this type of manufacturing.
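For readers who like to check the math, here is a quick back-of-the-envelope pass over the figures quoted in this piece: the per-kWh price of a stack and a module, the warranted cycle count, and the planned expansion to five lines. It is only a sketch that uses the article's own numbers; nothing here comes from Aquion directly.

```python
# Back-of-the-envelope checks using only the figures quoted in the article.
stack_price_usd = 850.0        # S-10 battery stack price
stack_capacity_kwh = 2.0       # capacity per stack
module_price_usd = 11_000.0    # approximate module price
stacks_per_module = 12

print(stack_price_usd / stack_capacity_kwh)
# -> 425.0 dollars per kWh, i.e. below the $500 mark cited above

print(module_price_usd / (stacks_per_module * stack_capacity_kwh))
# -> ~458 dollars per kWh at the module level

print(3000 / 365)
# -> ~8.2 years at one full cycle per day, matching "more than eight years"

print(5 * 200 / 1000)
# -> 1.0 GWh per year once all five planned lines are running
```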
No doubt there will be future hurdles that Aquion will have to overcome. The ultimate test of Aquion’s success will come from its customers, particularly the early ones. This first set of customers is willing to pay initially higher prices for the chance to use a new and exciting technology, and they are primarily using the batteries for off-grid solar projects. Down the road, when the batteries are cheaper, utilities and grid management will be the bigger fish to catch. While Aquion has a way to go before it can scale up enough to change the game for solar and grid power, it’s an example of an emerging technology that’s at the beginning of a transformational change in the energy industry. It’s not a company that’s riding the wave of a fading tech trend. If anything, it could be too early. But I’m predicting that it’s going to be inhabiting that Pennsylvania factory for a long time, employing local workers and developing an entirely new type of American manufacturing.

posted 4 days ago on gigaom
FCC Chairman Tom Wheeler clearly wants to protect communities from state intrusion by having the legislative barriers to public-owned networks in 19 states removed or heavily curtailed. Those who see high-speed internet services strengthening local economies, transforming medical and healthcare delivery, improving education and increasing local government efficiency agree with him. How would removing these walls to progress impact not only the states with public network restrictions but other states as well? Community broadband history indicates it would unleash competitive forces so that, according to Massachusetts Senator Ed Markey, “prices go down dramatically. All of a sudden, the two private sector incumbents find a way to lower prices.” Constituents in urban as well as rural communities also would get much faster speeds.
Community broadband success breeds success
Over 400 public-owned networks operate in the United States, according to the Institute for Local Self-Reliance, including 89 fiber and 74 cable community-wide networks, and over 180 partial-reach fiber networks covering business districts, industrial parks and medical and university campuses. Evaluating these networks’ impact on job creation, education and stirring innovation, as well as their financial sustainability, uncovers hundreds of success stories that can be replicated once the barriers in those 19 states drop. Some networks, such as those in Cedar Falls, Iowa; Thomasville, Georgia; Santa Monica, California; and Bristol, Virginia, have operated successfully for over 10 years. Thomasville Mayor Max Beverly credited its 14-year-old network with profits of $2 million a year, which have contributed to the city eliminating taxes. Danville, Virginia’s public utility network, which launched in 2004, helped cut the locale’s unemployment in half, down from 19 percent, by directly enticing several large companies to the area and creating a local technology industry that otherwise likely wouldn’t exist. Santa Monica’s fiber network, launched the same year, reduced government voice and data communication charges by over $750,000 a year. Those savings, plus selling fiber services to local businesses, helped build a $2.5 million surplus. However, community networks’ return on investment often is not about revenue but about benefiting the public good. Prestonsburg, Kentucky, for example, built a municipal wireless network in 2008 for its 3,255 citizens. Brent Graden, the city’s former director of economic development, stated, “We have folks who live in pretty remote areas we call hollows who hadn’t seen a doctor in years, particularly specialists, because it’s so much trouble and expense to get to an office or hospital. Those folks use videoconferencing over the network to enable doctor consultations.” Dismantling legislative barriers would accelerate the number of interstate projects such as a Chattanooga incubator that is using the public utility’s (EPB) gig network to link with the University of Texas at Dallas’ gig network to collaborate on a 3-D printing project. Tennessee’s anti-muni network law currently prevents EPB from expanding its service to nearby communities desperate to open similar mutually beneficial opportunities with states not shackled by these laws.
Private-sector failings drive the need to remove barriers
While anti-muni network laws, incumbents’ lawsuits and predatory marketing have caused many public networks, such as those in Jackson, Tennessee and Reedsburg, Wisconsin, to struggle early on, and a small handful have failed, it is the private-sector failures that are stunning. “Between 1993 and 2013, large telephone and cable companies collected over $380 billion in rate increases, tax breaks, changes in depreciation schedules for upgrades and other perks,” states Bruce Kushnick, executive director of New Networks Institute, a market research and consulting firm. To win these concessions, teams of incumbents’ lobbyists promised practically every state they’d deliver 45 Mbps of symmetrical bandwidth to tens of thousands of homes in each state. In a report titled “The History, Financial Commitments and Outcomes of Fiber Optic Broadband Deployment in America: 1990 – 2004,” Kushnick cataloged from public records the promises made and not kept. Bell Atlantic promised that by 2010, 100 percent of New Jersey would be able to receive services capable of 45 Mbps symmetrical. It promised that in Pennsylvania, 100 percent of its access lines in each of its rural, suburban and urban rate centers would be broadband capable by the end of 2015. Pacific Bell’s “California First” plan called for 5.5 million homes in that state to be wired with 45 Mbps symmetrical service by 2000. Seeing this pattern of broken promises that were costing states billions, frustrated communities began launching public networks. In response, states began passing laws restricting public networks. Exposing this as an anti-competition tactic is easy: just follow the money. The four legislators sponsoring the anti-muni network bill that passed in North Carolina in 2011, for example, received an average of $9,438 from incumbents in 2010-2011. That is more than double the $3,658 received on average by those who did not sponsor the bill. A detailed analysis connects the many dots and dollars linking incumbents and those who voted for this bill. U.S. Congressional Rep. Marsha Blackburn of Tennessee led a bill that passed this week to thwart efforts by Wheeler to rescind laws such as North Carolina’s. Research reveals that two of Blackburn’s largest career donors are AT&T ($66,750) and Comcast ($36,600), two of EPB’s (Chattanooga) big competitors. With incumbents failing to deliver even a few megs of speed in many areas while public entities are announcing gigabit networks seemingly every day, broadband champions are demanding that the FCC defend the true free market against state intrusion. In their view, if 10,000 people and businesses in a community spend $1 million a month for broadband services, those constituents are the broadband market. If that market isn’t satisfied, they have a right to vote with their personal dollars for a better solution. They also have a right to vote to spend their publicly generated dollars for a better solution, including allowing their local governments to run the networks. Community broadband advocates strongly believe these anti-competition walls that incumbents have built through restrictive laws will fall. As more cities build their own networks and repeatedly prove them successful, these walls will have trouble withstanding pressures such as the need to reverse bad economic conditions, to better compete in innovation or to dramatically improve education.
Craig Settles is a consultant who helps organizations develop broadband strategies, host of radio talk show Gigabit Nation and a broadband industry analyst. Follow him on Twitter (@cjsettles) or via his blog.

posted 4 days ago on gigaom
When it comes to software and cloud services, the concept of lock-in has a heightened meaning. Vendor lock-in exists in all industries, but nowhere does it take you more by surprise than in software and cloud services. Much like when selecting a house renovation contractor, there’s a hidden penalty you pay both operationally and financially. Over the past 15 years, I’ve served as CEO at several software companies, including MySQL and Eucalyptus Systems, where I am today. During this time, I have seen organizations large and small battle with lock-in. The cure they found was open source reference implementations of a widely accepted standard. What to a customer first looked like an exciting new piece of software — easy to try, no strings attached — soon infests the organization and isn’t quite as easy to remove. That’s the definition of lock-in: a decision that later proves costly or impossible to get out of. It’s ironic that the ease of adoption of open source can lead exactly to this situation. To understand lock-in, there are three main things to know:
1. Lock-in happens with your own designs as much as it happens with vendor-provided offerings.
2. Buyers may publicly decry lock-in, but like complaints about death and taxes, few can avoid it.
3. Agility is the opposite of lock-in.
The first point is about the nature of lock-in. Vendor lock-in is just one form. But if you customize the software you use, or if you produce your own “glue code” to integrate disparate pieces of your infrastructure, you are locking yourself into an architecture of your own making. Because of the ongoing effort required to maintain these systems, this will be costlier than any other form of lock-in. There is a way to minimize both types of lock-in. Just think of how you avoid each type on its own. You avoid vendor lock-in by using open source software. You avoid design lock-in by using standard software components with industry-proven interfaces. By using industry-standard open source software products, you reduce your lock-in to an absolute minimum. You can always choose to self-support. You don’t require an ongoing financial relationship with a vendor to continue to use the software. And because you chose a product and not a project, you also avoid ending up with design lock-in. This is what Google and other leading vendors are doing, and it explains the enormous popularity of Linux, JBoss, SQLite, MySQL and other open source products (as opposed to projects). You know that you are not dependent on just one vendor, and you know that you don’t have to customize the product for your own needs. The product works as expected out of the box. The second point about lock-in is that risk-averse decision-makers actually don’t mind it. If you have staffed your organization with leaders who are there to protect an existing business or safeguard some company asset, you can bet that they will always recommend sticking to those expensive commercial licenses. Publicly they may complain about lock-in, but in practice they will ignore the cost savings and enormous scalability benefits of modern open source software. They have a vested interest in the status quo, and because the annual increase in software costs doesn’t upset the CFO too much, no change in course will be taken. The third point is that while maintaining the status quo is rational, it’s also locking you out of further innovation and potential competitive advantage. By locking into the status quo, you are shutting the door to new experimentation and learning.
There is little incentive for your team to try new technologies, since you’ve effectively communicated an unwillingness to try new things. What seemed like a good risk-mitigation strategy now leaves you blind to improvements. You’ve inadvertently institutionalized resistance to change. The only way out of this predicament is to consider the opposite of lock-in: agility. Agility is the ability to make changes (useful ones, we hope!) without much pre-planning and without rocking the boat. Agility is the ability to go from idea to experimentation very quickly, and to go from experimentation to actual deployment equally quickly. You can increase agility in your organization by:
- lowering the cost of experimentation
- reducing lock-in of all types
- splitting decisions into smaller decisions
- reducing organizational latency, i.e. reducing the time it takes to get a response or a decision
Consider a private cloud infrastructure that allows for quick and inexpensive experimentation. Use standard open source products to avoid lock-in of all types. Divide projects into smaller interoperable parts, and delegate decision-making to the project managers. Measure managers by how quickly they make decisions and how they enable their teams to experiment and innovate. With the above list, it becomes clear that lock-in is actually one of several opposites of agility, not the only one. You won’t become agile just by removing lock-in. But by not removing lock-in, you most certainly will not be agile. Choose open source products that follow an established standard. Do not customize. Maintain your freedom, and strive for agility. That’s true avoidance of lock-in. It leads to innovation and competitive advantage. Mårten Mickos is CEO of Eucalyptus Systems, provider of an AWS-compatible private cloud software platform. Previously, he was CEO of MySQL AB. Photo courtesy of Shutterstock user Sergey Nivens.

posted 4 days ago on gigaom
It’s time to admit it: I’ve been an Android user for the past two years. That might seem surprising, since when I first started freelancing for Gigaom, I covered only Apple products. While some posts where an Apple user tries Android (or vice versa) end up evincing as much enthusiasm as that of a toddler being asked to try Brussels sprouts, this isn’t one of them – there are things I like about Android, just as I do iOS. While I doubt I’m going to go the Full Ihnatko, I can easily picture always owning an Android device. I still use an iPhone as well as a third-generation iPad.
Why I got a Nexus 7
I got the Nexus 7 (2012 edition) for two reasons: even though I write about Apple products, I felt I needed to be up to speed on Android, and at the time the iPad mini wasn’t available and I liked the screen size. I also felt that since the Nexus 7 was a Google-branded device, I would get the pure Android experience, not a forked or bloated version. This was a good decision. While the Nexus 7 isn’t my most-used tablet (that would be the iPad), I’ve had a very positive experience with it. I think that even if the iPad mini had been out when I got the Nexus 7, I would still have ended up with the Nexus 7 because of its better screen. The Nexus 7 is part of my daily carry and is used frequently.
In some ways, I find Android superior to iOS
At a certain level, I’ve always been platform agnostic. I’ve long believed that devices are simply tools to get a job done. I’ve tried to take steps to eliminate device and OS lock-in so that if a better platform for my needs comes along, I can at least migrate most of my data. I’m running Android 4.4.4 and I like it. A few weeks ago I was on vacation in Nantucket and wanted to see how close the ferry was to the island, and my iPhone wouldn’t render the map correctly. Because the Nexus 7’s GPS chip doesn’t require a cellular or Wi-Fi connection, I was able to use GPS Copilot to see where we were. I know that’s not strictly an Android feature, but it’s one area where I feel the Nexus 7 is better than my iPad. I love the inter-app sharing on Android. If I have Pocket or Evernote installed, adding a web page to these apps is easily done via the Share menu. While this is coming to iOS 8 with extensions, I’m not in the beta for any apps that support it, so I’m still not sure how it will work in practice. A web-based store where I can buy an app without needing to launch an app first is great. I often look for an app on my work laptop, and it’s very easy to find and install apps that way. The way Android handles the home screen is wonderful. I love not having to play a tile game if I want to move an app to a specific part of my home screen. I wish iOS had a way to tap and see an alphabetical listing of all the apps on my device. Those are a few ways that I feel Android is superior to iOS, and I have no doubt the comments section of this post will be filled with others. I feel that KitKat is a great OS that is enjoyable to use.
Why I don’t use Android as my main device
While I try to make it easy to switch platforms, it’s pretty much impossible to avoid some sort of lock-in. I try to buy my ebooks from Amazon and my magazines from Zinio, and I try to keep my files in a service like Dropbox, OneDrive or Google Drive. This way, if I decide to switch away from iOS, I can get my data. Last year, I mentally put Apple on notice.
While iOS 7 was the re-skin I was expecting, and a necessary step to move the platform forward, what Apple announced this year for iOS, and will (hopefully) announce for the iPhone, will be important in keeping me as a customer. There are also a lot of conveniences (such as SMS and phone calls across all devices) if all of your devices are Apple devices. The other reason I still use iOS is app selection. I’m a musician who uses his iOS devices a lot for music creation, and the app I use (JamUP) isn’t available on Android. I use Apogee devices as well, and those also do not work with Android.
What’s ahead for me and Android
Let’s assume that Apple releases a larger iPhone, which seems to be a pretty safe guess at this point. I still need a smaller tablet. I can throw the Nexus 7 in the front pocket of my work pants and go read at lunch. The 32 GB iPad mini with Retina display, at $499, is a serious chunk of change. I can’t afford that, especially for a secondary tablet. That brings me to the rumored Nexus 8 tablet. There’s not much known about it — we don’t have details like specifications, price and release date. I am very curious about Android L. Since my Nexus 7 (2012) is a little slow with KitKat, and the Nexus devices are usually cheaper than the Apple tablets, a Nexus 8 would fill my small-tablet need quite well.

posted 4 days ago on gigaom
Amazon Web Services, which perfected its pitch to developers long ago, is adding more enterprise-grade features and services to attract — or at least not repel — IT professionals. The latest example was last week’s news that AWS Marketplace is now offering some enterprise applications via annual subscriptions. Before now, buyers had to pay by the hour or the month. Granted, the 90 applications now offered do not include some biggies — no Oracle or Microsoft or SAP stuff is included. But if you’re in the buying mood for MicroStrategy’s Analytics Enterprise or Brocade’s Vyatta vRouter or Citrix NetScaler VPX, you can source it by the year. And Apple, which has seen Android phones steadily eroding iPhone market share, apparently decided it’s time to ditch its historical contempt for IBM — many stories pulled up the old photo of Steve Jobs flipping the bird in front of an IBM building — and to join forces with it. In a joint deal, the companies said IBM will act as AppleCare support in enterprise accounts and will work with Apple to develop more than 100 enterprise vertical apps for iPhones and iPads.
Structure Show: Fake Steve Jobs on Hollywood’s tech fetish
Dan Lyons, aka Fake Steve Jobs, may be a Hollywood writer these days — he’s working on season 2 of HBO’s Silicon Valley — but he always keeps a sharp eye on what’s happening in tech. And his take is always entertaining. Worth a listen.

posted 5 days ago on gigaom
I love to cook. My ideal vacation isn’t a trip to the beach or a trek to a national park. I spend my off time with my butcher or fishmonger, asking for strange cuts of meat or exotic sea creatures before roasting, smoking and curing them in my kitchen or backyard. Consequently, I’ve collected a sizable number of kitchen implements over the years, and among the tools in my drawers is every kind of cooking thermometer you can imagine: from instant-read stick thermometers to meat probes that connect to my phone via Bluetooth. So when Supermechanical released its new Range iOS cooking thermometers, which it claimed could replace the various gauges and probes in my drawers, I welcomed the opportunity to test them out.
The Range Aqua in blue and Ember in red. Source: Supermechanical
After a battery of tests on its Ember and Aqua thermometers, I’m not tossing out all my other temperature-measuring gear, but in the Ember I did find one of the simplest and best-designed smart thermometers that I’ve ever used. But if you’re a casual cook who wants a handy gadget for gauging the doneness of your chops on the grill or monitoring the temp of a roast in the oven, then the Ember would be a handy solo thermometer to own, pulling off the tasks of multiple tools.
Lamb and Pralines
The Ember is a roasting thermometer designed to be jabbed into meat, while the Aqua, as the name implies, is designed to take the temperature of liquids, such as oil for frying or boiling sugar for candy making. Both are made from high-heat-resistant metals and silicone (up to 450 degrees), and both have clips in their handles so they can be attached to pots or roasting pans. Most significantly, though, both probes have 5-foot silicone cords that connect to an iPhone or iPad’s headphone jack. A slickly designed iOS app is where you see all of your readings, set temperature alarms and generally follow the progress of a dish. For those of you experimenting with different temperature or timing patterns, you can save a historical graph of your dish’s progress.
Photo: Kevin Fitchard
I’ll start with the Ember because that’s the one I found myself using much more. Two weeks ago I put it to what I considered the ultimate test: a 7 lb. bone-in lamb shoulder roast. It’s a tricky cut of meat because it has a complicated bone structure that requires a lot of individual readings to get an accurate internal temperature. It spent four hours in the oven, but for the first thirty minutes I blasted it with 425-degree heat to get a good crust started before letting it roast gently at 325 degrees. The Ember handled the challenge beautifully. I was able to leave the probe in my roast throughout the entire cooking process, while a lesser thermometer would have been damaged by the initial heat blast. The heat-resistant cord trailed out of the closed oven door and linked to my iPhone on the counter. The Range app then tracked the general temperature of the meat throughout the cooking process, alerting me when it hit particular temps. But when it came time to test different sections of the meat for doneness, the roasting thermometer turned into an instant-read thermometer. I’d poke it into different areas of the roast and get a temp in less than 5 seconds. That’s an important thing to note, because cooking thermometers these days are often designed to do one thing well but fall down when presented with other tasks. If you want to get super-geeky about it, the difference usually comes down to the type of sensor used: a thermistor versus a thermocouple.
Thermocouples are generally regarded as the superior, faster sensors, while thermistors are slower but more durable. (For a more detailed explanation — as well as reviews of every conceivable digital and smart thermometer out there — check out Meathead Goldwyn’s exhaustive exploration of the topic on AmazingRibs.com.) According to co-founder John Kestner, Supermechanical basically sourced the best thermistor it could find when it created Range. The result is a thermometer that can truly pull double duty. So instead of spending 5 minutes hovering over my open oven to take half a dozen temp readings, I was able to take them in 30 seconds.
Photo: Kevin Fitchard
For my test of the Aqua, I made pralines, those little pecan, cream-and-sugar candies hailing from New Orleans. The Aqua has a much longer (6 inches compared to the Ember’s 3 inches) blunt-nosed probe designed to remain suspended in a pot of liquid. It’s apparently very popular with beer brewers, but I found it a bit awkward to use. Unless I was using a very large pot, it basically got in the way of what I was cooking, and its tip constantly touched the bottom of the pot, which interfered with an accurate reading. An old-fashioned clip-on candy thermometer was much better suited for making pralines. And to be honest, the Aqua uses the exact same technology as the Ember, so if you needed a liquid thermometer in a pinch you could use the Ember for the task.
The bottom line
They may be versatile, but Range thermometers can’t do everything. They’re tied to your iPhone or iPad, dependent on its battery life and on having it nearby to use them properly. If I’m smoking a brisket or grilling over a very low flame, I’ll use my trusty iGrill, which has a base unit that I can leave outside yet still get a constant temperature reading sent to my iPhone over a Bluetooth connection. But as a general-purpose cooking thermometer the Ember is very impressive. I think Kestner and his team managed to combine the best aspects of Silicon Valley industrial design with true kitchen utility. I love my iGrill, but operating it is sometimes like programming a 1980s-era clock radio (you hold this button down while pressing that button for so long, and so on). The Range app is clean yet simple to navigate. To set a temperature alert, you just hold your finger down on the screen and move it up and down to set the temp you’re aiming for. To set a timer, you move your finger in a clockwise or counterclockwise motion. What I’m most excited about seeing, though, is the cooking technology ecosystem Supermechanical plans to build around Range. The company has launched a Kickstarter project for what it calls Range Oven Intelligence, which basically creates a wireless hub that acts as an intermediary between your thermometer and other gadgets and appliances. Supermechanical wants you to be able to monitor your roast’s temperature from your LG TV or your Pebble smartwatch. It’s opening up its APIs so kitchen app makers can access its thermometer readings directly from their recipe pages. Instead of just providing a nifty connected tool, Supermechanical has ambitions to help build the connected kitchen. The Ember and the Aqua both sell for $69.95 on Supermechanical’s website.

posted 5 days ago on gigaom
While I’ve longed for a Chrome OS tablet for more than a year — and even attempted my own project to make one — Google’s Chrome OS platform isn’t ready for that. It is a step closer, though. Last week, handwriting recognition appeared in the on-screen keyboard in the early Canary channel of the software. A few days later, it progressed to the Dev channel. Does this mean a Chrome OS tablet could be in the cards? There are still other aspects of Chrome OS that don’t fit well with that idea, but it’s not out of the question, particularly as the interface gets redesigned in ways that might be more touch-friendly. We discuss that possibility on this week’s Chrome Show podcast and also talk about Dell’s Chromebook success in education, Chromecast app discovery and new chips that may power Chromebooks and Chromeboxes. Tune in below or download the podcast here to listen to this week’s episode.

posted 5 days ago on gigaom
When Hadoop first started gaining attention and early adoption, it was inseparable – both technologically and rhetorically – from MapReduce, its then-venerable Big Data processing algorithm. But that’s changing, and rapidly. With the release of Hadoop 2.0, MapReduce is taking a back seat to some newer technology. But of all the front-seat occupants, who will take the wheel?
MapReduce in Big Data history
Originally, the MapReduce algorithm was essentially “hard-wired” into the guts of Hadoop’s cluster management infrastructure. It was a package deal; Big Data pioneers had to take the bitter with the sweet. At first this seemed reasonable, since MapReduce is truly powerful: it divides the query work – and the data – up amongst the numerous servers in its cluster, facilitates teamwork between them, and gets the answer. The problem underlying all of this is pretty simple. MapReduce’s “batch” processing approach (where jobs are queued up and then dutifully run) doesn’t cut it when multiple, short-lived queries need to be run in quick succession. Hadoop 2.0 introduces YARN (an acronym for “yet another resource negotiator”) as a processing-algorithm-independent cluster management layer. It can run MapReduce jobs, but it can host an array of other engines as well.
Along comes Spark
Meanwhile, separate from the development of YARN, an organization called AMPLab, within the University of California at Berkeley, developed an in-memory distributed processing engine called Spark. Spark can run on Hadoop clusters and, because it uses memory instead of disk, can also avoid MapReduce’s batch-mode malaise. Better still, Hortonworks worked with personnel at Databricks (the commercial entity founded by Spark’s AMPLab creators) to make Spark run on YARN. So far, so good. YARN provides a general framework for batch and interactive engines to process data in a Hadoop cluster, and Spark is one such engine, which utilizes random access memory (RAM) for very fast results on certain workloads. A question remains, though: what about other Hadoop distribution components – like the SQL query layer Hive or the data transformation scripting environment Pig – that rely on MapReduce? How can those components be retrofitted to take advantage of the shifts in Hadoop’s architecture?
Up the stack
Hortonworks, whose engineering team effectively spearheaded the work on YARN, also created a component for it called Tez that sandwiches in between Hive or Pig on the one hand, and YARN on the other. Hortonworks added Tez to Hadoop’s Apache Software Foundation source code, as it did an updated version of Hive. Get the most recent versions of Hive and Hadoop itself and, bam!, you can use them interactively for iterative query work. Meanwhile, an industry consortium, which includes Cloudera and MapR, has announced it will be retrofitting Hive and Pig – as well as most other Hadoop distro components – to run directly against Spark.
Symbiotic adversaries
Spark and Tez, which in most contexts likely wouldn’t be compared, suddenly find themselves competitors. Both of them pave the way for MapReduce’s diminished influence, and for interactive Hadoop to move to the mainstream. But with the competing approaches they offer, there is a risk of fragmentation here, and customers should take notice. In-memory engines work extremely well for certain workloads, machine learning chief among them. But making an in-memory engine the default for most every job, especially those that get into petabyte-scale (or higher) data volumes, seems unorthodox.
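To make the batch-versus-interactive contrast above concrete, here is a minimal, generic PySpark sketch (mine, not taken from any vendor’s distribution) of the kind of job where keeping data cached in memory pays off. The input path is a placeholder; the script can be submitted to a YARN cluster with spark-submit --master yarn or run locally with --master local[*].

```python
# Minimal PySpark sketch: several quick, successive queries over the same
# cached dataset -- the pattern where MapReduce's queue-up-and-run batch
# model falls short. The HDFS path is a placeholder.
from pyspark import SparkContext

sc = SparkContext(appName="interactive-demo")

lines = sc.textFile("hdfs:///data/sample.txt")          # placeholder input
words = lines.flatMap(lambda line: line.split())
words.cache()                                           # keep the RDD in RAM for reuse

total_words = words.count()                             # first pass reads from disk
distinct_words = words.distinct().count()               # later passes hit the cache
top_ten = (words.map(lambda w: (w, 1))
                .reduceByKey(lambda a, b: a + b)
                .takeOrdered(10, key=lambda kv: -kv[1]))

print(total_words, distinct_words, top_ten)
sc.stop()
```

Running the same succession of short queries as chained MapReduce jobs would pay the full disk-read and job-scheduling cost each time, which is exactly the short-lived-query malaise described above.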
Batch-oriented MapReduce as the sole option for data processing wasn't enterprise-ready; YARN, Tez and Spark have all emerged out of the need to address that shortcoming. The irony here is that giving customers multiple, competing ways to use the very same Hadoop distribution components isn't especially enterprise-friendly either.

The engines, united?

If YARN's open architecture is to enable multiple, nuanced, overlapping solutions, then an optimizer that picks the right one for a given query may be needed, so that the customer needn't make that decision, query after query (a hypothetical example of that decision is sketched below). Choice is good, but fragmentation and complexity are not. In the 1980s, the UNIX operating system splintered badly, and this impeded market momentum for that operating system. In this decade, Hadoop has become a data operating system. Hopefully it will avoid UNIX-like entropy.
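As a purely hypothetical illustration of the per-query engine decision described above, here is a small sketch of the kind of logic such an optimizer might apply. The heuristic, threshold and query are invented; hive.execution.engine is a real Hive setting, but whether a given build supports a particular engine depends on the distribution.

import subprocess

def run_hive_query(sql: str, estimated_input_gb: float) -> None:
    # Crude rule of thumb: small, interactive scans go to a DAG-style engine,
    # while very large batch scans fall back to classic MapReduce.
    engine = "tez" if estimated_input_gb < 100 else "mr"
    statement = f"SET hive.execution.engine={engine}; {sql}"
    subprocess.run(["hive", "-e", statement], check=True)

run_hive_query("SELECT page, COUNT(*) FROM clicks GROUP BY page", estimated_input_gb=12)

A real optimizer would obviously weigh far more than input size, which is precisely why such a decision may need to live inside the platform rather than in customers' heads.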

Read More...
posted 5 days ago on gigaom
Employers aren't always able to prevent employees from using their own devices while on the job. This can cause problems for a company's legal counsel when a lawsuit involves electronic discovery of company documents. More and more of these documents are being created on personal devices and stored somewhere on the internet. One possible solution for companies might be to enter into agreements that make the employee's personal device the employer's device as well.

What is electronic discovery?

Electronic discovery, or e-discovery, takes place in the pre-trial phase of a lawsuit, when the parties involved request evidence from each other. Companies face three basic challenges when complying with an e-discovery request. The first is identifying and collecting all of the information that meets the criteria of the request. Success is largely dependent on the data retention and governance policies of the companies involved. The inability to produce the information requested in a lawsuit can result in heavy fines. The second challenge is the actual processing of the information to meet the standards necessary for reviewing the information, which is the third and by far the most expensive of the three challenges. Most of the time the processing step involves carefully transforming the information to a standard image or document format like TIFF or PDF that teams of legal counsel associates spend days and even weeks reviewing. While there is some software that can help with the review process, it is mostly a manual and tediously long one.

Costs of e-discovery

Complying with and executing an e-discovery request can be quite expensive. The Minnesota Journal of Law, Science and Technology found that e-discovery costs are in the range of $5,000 to $30,000 per gigabyte, depending on a law firm's rates (see the back-of-the-envelope sketch at the end of this piece). You may recall the 2008 case between Qualcomm and Broadcom, in which a federal magistrate ordered Qualcomm to pay more than $8.5 million in sanctions when Qualcomm withheld tens of thousands of documents. Cases like that one have contributed in part to the double-digit growth of the e-discovery market. Gartner expects the industry as a whole to grow from $1.7 billion in 2013 to $2.9 billion in 2017. Costs associated with e-discovery have companies evaluating their information governance policies in an effort to better manage the associated expenses. But attempts to keep those costs in check assume that companies have access to the information involved in the lawsuit, which isn't always the case.

Being able to access the information

Most of the cost estimates associated with collecting and processing data prior to review assume that a company has the electronic information in its possession, or at least has the legal right to access it. But what if the information resides on a device that the company doesn't have direct access to — an employee's personal device, perhaps? InsideCounsel maintains that dealing with privacy and ownership issues will become increasingly challenging and costly as the number of personal devices in the workplace rises. One reason for that rise is that companies are looking for opportunities to allow their employees to bring their own devices to work.

Bring Your Own Device

Companies pay employees salaries that in turn are used to purchase personal smartphones, tablets and computers. Many of these companies also equip their employees with work-related smartphones, tablets and computers. So in a sense, companies are paying for two sets of devices for one individual.
Why not just have employees bring their own devices to work and save the cost of having to buy twice as many devices? It sounds good in theory, but in practice, things don't always go as planned. Not all Bring Your Own Device, or BYOD, policies are the same. Some companies prohibit employees from bringing their own devices in because it causes too much of a distraction. Others provide limited access to internet-based Wi-Fi that is separate from the company's core network. But in order for BYOD to do what it was intended to, employees need to do real work on their personal devices – responding to email, editing documents and accessing company data. That's where the challenges of e-discovery come to bear. In a global survey, Fortinet found that 70 percent of personal account holders have used their personal cloud-based storage accounts for work purposes. This shows that smart devices are enabling a complex situation: Bring Your Own Cloud, or BYOC. That same survey of 3,200 employees ages 21 to 32 showed that 51 percent ignored their companies' rules regarding the use of personal devices at work, and just did what they felt was necessary to get their jobs done. That attitude is one reason that companies are thinking about how to reconcile the risks of e-discovery with the benefits and convenience that BYOD has to offer.

Mandatory BYOD by 2017

In the near future, you might be required to have your own smartphone, tablet or computer. Just as employers are not required to provide employees with a means of getting to work, coming to work equipped with the tools necessary to do your job may soon become more commonplace in offices around the world. Historically, this is not that strange a concept: In the past, the person who owned the hammer and chisel was by default the town mason. VMware, Ingram Micro and Cisco are among the companies that have already implemented such policies, and Gartner predicts that 50 percent of employers will require employees to supply their own devices specifically for work purposes by 2017. If done with the company's interests in mind, each mandatory BYOD policy would also adjust the company's privacy policy and require employees to surrender their personal devices when it is legally in the company's best interest to do so. In this way, mandatory BYOD policies could help manage the risk associated with complying with e-discovery requests when the data resides on an employee's personal device.
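For a sense of why the per-gigabyte review costs cited earlier matter so much, here is a back-of-the-envelope sketch using the $5,000 to $30,000 per gigabyte range quoted above; the 40 GB volume is an invented example.

LOW_PER_GB = 5_000    # low end of the cited review cost, in dollars per gigabyte
HIGH_PER_GB = 30_000  # high end of the cited range

def ediscovery_cost_range(gigabytes: float) -> tuple:
    """Return the (low, high) estimated review cost for a given data volume."""
    return gigabytes * LOW_PER_GB, gigabytes * HIGH_PER_GB

low, high = ediscovery_cost_range(40)
print(f"Reviewing 40 GB could run roughly ${low:,.0f} to ${high:,.0f}")
# Prints: Reviewing 40 GB could run roughly $200,000 to $1,200,000

Even a modest trove of documents sitting on an employee's personal phone or cloud account can therefore move the needle on litigation costs, which is the nub of the BYOD problem described above.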

Read More...
posted 5 days ago on gigaom
The Supreme Court ruled last month that Aereo, whose service helped subscribers stream TV signals, is like the cable companies that need a license to transmit over-the-air TV. But when Aereo responded by trying to buy such a license, the Copyright Office stated it is not a cable service — leaving the start-up in a legal no-man's land where it is likely to die. The situation sums up the surreal world of TV regulation, which is built on an outdated set of rules that serve to entrench the current model of TV distribution. Under this model, broadcasters charge to stick consumers with bloated bundles of channels, and newer internet services like Aereo get frozen out altogether.

A legal sword for broadcasters

The rules that created this mess are complicated, but it's easier to understand them if one starts at the beginning: specifically, 1976, which is when Congress required cable companies to start paying copyright fees to include over-the-air stations like CBS or NBC in the bundles of channels they sold to customers. Those payments take place through a compulsory licensing system, which requires the cable companies to pay around 1 percent of their revenue to the Copyright Office. The office then distributes that money to whoever owns the rights to the shows. That is exactly what Aereo tried to do after the Supreme Court rejected its argument that it was not a cable service, and that it simply rented equipment for consumers to record TV on their own. The court instead lumped Aereo into what dissenting Justice Antonin Scalia called a "looks-like-cable-TV" category, which is what led Aereo to apply for a license. So what's the problem? Why can't Aereo simply hand over part of its revenue to the Copyright Office, as it has already tried to do, and start serving customers again? The problem is that Aereo also confronts a second set of rules that arrived via the 1992 Cable Act. Those rules give broadcasters the power to force a cable company to carry their signal, or else require it to obtain "retransmission consent." These rules, ironically, were designed to protect broadcasters from the growing power of cable companies, but, in the view of some lawyers, the broadcasters "converted … a shield into a sword."

Catch-22

Used as a sword, the rules mean the broadcasters can cause a channel to "go dark" for months until a cable service agrees to pay a fee. This can create high-profile disputes, such as a recent blackout battle between CBS and Time Warner Cable, that leave consumers caught in the middle. In Aereo's case, CBS and the rest of the "big 4" could also use this "consent" process as a way to force Aereo to buy not just the broadcast channel, but a bundle of others as well. ABC might agree to sell its signal for $1 — so long as Aereo also agreed to pay for ESPN7 and the Schnauzer Home channel too. The point is that, under the retransmission sword, the economics of Aereo's business look hopeless. Even if that were not the case, however, Aereo may not even get a chance to try in the first place. That's because it's likely to trip over yet another set of arcane rules that limit who can be considered a cable service. As the New York Times explained in 2012, regulators and the TV industry are choosy about who can call themselves a cable company – it requires the company to demonstrate that, in TV-industry speak, it is a "multichannel video programming distributor," or MVPD. And for now, internet companies are not eligible.
That's why the Copyright Office wrote this week that it believes Aereo's service falls outside of the compulsory license category, and that "We do not see anything in the Supreme Court's recent decision … that would alter this conclusion." (The office's word is not final, but courts have deferred to its definitions in the past.) So in other words, the Supreme Court says Aereo falls under a "looks-like-cable-TV" category, but a government office that oversees cable TV says it is not cable TV. It's the very definition of a Catch-22. Aereo has one hope, though it's a slim one. It can try to persuade a federal judge in New York that it should qualify for the compulsory license rules under the 1976 law – and that it can do so without walking into the retransmission sword of the 1992 law. An earlier TV service, called ivi, tried this before but fell on its face before an appeals court in 2012. Aereo is in a different position, however, in light of the Supreme Court ruling and because the 2012 decision said ivi didn't qualify because it was transmitting national signals — which is different from Aereo, which only picked up local ones. This legal argument, however, is barely more than a straw for Aereo to grasp at. The company must first ward off an imminent injunction and, if successful, survive a protracted court battle that the deep-pocketed broadcasters can fight for years — something Aereo probably can't afford to do. Simply put, the game is rigged against an innovative internet TV service like Aereo no matter how it tries to enter the market. And it will likely stay that way until Congress rewrites the law.

Read More...
posted 5 days ago on gigaom
Tired of slow web pages? So is Google, and the company thinks it has the key to speeding up the web: smaller image files. Google's WebP image format aims to replace existing image file formats like JPEG with a smaller file size, resulting in faster load times and significant bandwidth savings for website operators. But will the company be able to convince everyone to switch, or are we about to see even more media format fragmentation? WebP offers around a third better image compression than JPEG, which can add up to a lot of bandwidth savings and speed improvements, depending on how image-heavy a page is. YouTube was able to cut down page load times by up to 10 percent when it recently started rolling out WebP video thumbnails. Google has also saved several terabytes of bandwidth every day since switching images in the Chrome Web Store to WebP, and reduced the site's average page load time by nearly one-third. And when Google switched to WebP within its Google+ mobile apps, it saved 50 terabytes of data every day. Google isn't the only company supporting WebP. Netflix has begun to use the format within its new TV UI to load thumbnails more quickly. Facebook is using WebP to serve images within its mobile apps, and companies ranging from Tinder to eBay are experimenting with WebP as well.

It all began with video

The development of WebP was a bit of an accident for Google. The web giant was working on a video format called WebM, which is based on its VP8 video codec. Of course, videos are really just a series of pictures, and while working on VP8, Google engineers realized that the format was really good at compressing key frames, which are basically the pictures at the beginning of a new scene or sequence. The same technology, they realized, could also be used to compress single images. One of the things that makes WebP interesting is that it combines features that were previously unique to competing image file formats. JPEG is good at compressing photos and other detail-rich images. GIFs can be animated, and PNGs can be transparent and contain millions of colors. WebP can handle all of this, and any combination thereof. "You can have transparency in lossy images," said Google's WebP Product Manager Husain Bengali during an interview this week, adding: "You can get all of this in one format." Google first announced WebP in 2010, and has since integrated it into both its own Chrome browser as well as Android, and released libraries that allow developers to add the format to their iOS apps. WebP has since also been adopted by Opera, and there are a number of workarounds to bring it to other browsers. Altogether, up to 46 percent of all browsers in use support WebP, according to browser stats from Caniuse.com.

About those other 54 percent…

Of course, that leaves out 54 percent of all users. Firefox, Internet Explorer and Safari don't natively support WebP, and it's unlikely that the makers of these browsers are going to change their minds anytime soon. That's because, as so often happens, everyone has their own vision of what the future is going to look like. Microsoft is pushing for its own format, dubbed JPEG XR, to replace traditional JPEGs, and Apple has long steered clear of Google's media formats. The most logical ally for Google would be Mozilla, which has traditionally been a proponent of open media formats.
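To get a rough feel for the kind of savings described above, here is a minimal sketch (not Google's tooling) that converts a JPEG to lossy WebP and compares file sizes. It assumes the Pillow imaging library is installed with WebP support, and the file name is a placeholder.

import os
from PIL import Image

src = "photo.jpg"                                       # placeholder input file
Image.open(src).save("photo.webp", "WEBP", quality=80)  # re-encode as lossy WebP at quality 80

jpeg_kb = os.path.getsize(src) / 1024
webp_kb = os.path.getsize("photo.webp") / 1024
print(f"JPEG: {jpeg_kb:.0f} KB, WebP: {webp_kb:.0f} KB "
      f"({100 * (1 - webp_kb / jpeg_kb):.0f}% smaller)")

Actual savings vary a lot with the source image and quality setting, which is one reason the browser makers discussed below disagree about how big a win WebP really is.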
However, while the Firefox community has had a spirited debate about WebP, the foundation has remained skeptical of Google's efforts. Here's a statement sent to me by a Mozilla spokesperson: "WebP offers certain compelling features that JPEG does not, most notably an alpha channel, but compression efficiency is most important to us. We're not yet convinced that WebP's compression improvements or its feature set are strong enough to warrant the widespread introduction of a new image format on the Web, which will introduce fragmentation and compatibility issues during a lengthy transition period. We will continue to consider WebP and image formats in general, as we believe there is much room for improvement with images on the Web." Mozilla instead opted to stick with JPEG, but make it more efficient. The foundation announced a few days ago that it is developing an optimized image encoder dubbed mozjpeg that is capable of shaving off around five percent of an image's size on average, according to Mozilla CTO Andreas Gal. These efforts are being supported by Facebook, which is testing mozjpeg and funding the development of the next generation of the encoder with a $60,000 donation.

It's true, there are some issues

It's worth noting that Facebook isn't firmly coming down on either side of this debate; the company is just interested in improving page load times, and saving a few bucks on bandwidth in the process. To do so, it actually started to use WebP for some of the images on users' Facebook pages earlier this year. However, the company quickly discovered that people aren't just looking at their friends' photos on Facebook, but instead also download them to share via email and possibly even print. And that's when things got weird for some Facebook members, who simply didn't know what to do when their usual apps refused to open files with a .webp extension. Facebook reverted to serving JPEG files again, and Google quickly responded by making Chrome the default viewer for WebP on its users' computers. But the anecdote shows one reason why a transition to a new format can be tricky. Another issue is increased load on servers. Encoding WebP takes more compute power than encoding a JPEG file of the same quality. Bengali told me that his team has been working on making encoding more efficient with a recent release, but he also admitted that the higher complexity of WebP will always mean that it needs more resources for encoding. That's a trade-off worth making in order to speed up page loads and save on bandwidth, he argued. "In the long term, bandwidth savings will be more important," said Bengali.

How bad is fragmentation, really?

So what will be the next image format to rule the web? Will it be WebP, JPEG XR or even just plain old JPEG, possibly with slightly improved encoders? "That's the big question that all of us would like to have an answer to," Bengali said during our conversation. He admitted that WebP may not win all measurement tests, but insisted that it is a good combination of features and bandwidth savings. And it has the sheer force of Google, and Chrome, behind it. But without Internet Explorer, Firefox and Safari, this momentum only goes so far, and fragmentation seems inevitable. Which raises the question: How bad would it be if there was an image format supported by only half of the world's browsers?
End users wouldn't necessarily notice, save for faster page load times, as their browser or app would simply display the images as before. Website owners, on the other hand, would have to figure out how to generate and serve different versions of the same image to different users (a small sketch of that follows at the end of this post), which could add some complexity, and mirrors what's been happening in the video space, where different devices and browsers have long forced companies to encode files in multiple versions. The good news is that some of this complexity could be shouldered by Akamai and other content delivery networks, which have started to offer sites on-the-fly image conversion to WebP, resulting in faster-loading web pages for end users. And in the mobile app world, WebP is increasingly becoming a safe bet because it is supported by Android, and iOS developers can elect to include the necessary libraries to decode WebP pictures in their apps. Even the mobile web is starting to get faster, thanks to WebP. That's because the mobile versions of both Opera and Chrome offer users ways to speed up their web surfing by transcoding unencrypted sites in the cloud to make them more mobile-friendly. Part of that process is a conversion of images from JPEG to WebP, which, combined with other mobile optimizations, helps Chrome reduce mobile data use by up to 50 percent. So even if WebP doesn't completely replace JPEG or any other image format any time soon, it will likely become part of many companies' efforts to speed up the web. In the best-case scenario, site operators will be able to delegate the heavy lifting to middlemen like Akamai, and users won't notice anything at all, save for faster-loading websites and more fluid app experiences. "It kind of is a win for everyone," said Bengali.

Images courtesy of nito / Shutterstock, Fer Gregory / Shutterstock, Sashkin / Shutterstock and Google.
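For site operators wrestling with the "different versions for different users" problem mentioned above, the usual trick is simple content negotiation on the image request. Here is a hedged sketch using the Flask web framework; the route and file paths are placeholders, and in practice most sites would push this logic to a CDN instead.

from flask import Flask, request, send_file

app = Flask(__name__)

@app.route("/images/hero")
def hero_image():
    # Browsers that can decode WebP advertise it in their Accept header.
    if "image/webp" in request.headers.get("Accept", ""):
        return send_file("static/hero.webp", mimetype="image/webp")
    # Everyone else gets the plain JPEG fallback.
    return send_file("static/hero.jpg", mimetype="image/jpeg")

if __name__ == "__main__":
    app.run()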

Read More...
posted 6 days ago on gigaom
Most people understand why IBM wants to pal around with Apple and support all those iPads and iPhones flowing into the enterprise. But the deal would also have given Steve Jobs-era Apple a chance to claim victory over a long-time nemesis. And it gives us a chance to run this not-at-all-posed photograph of Ginni Rometty and Tim Cook again.

IBM CEO Ginni Rometty and Apple CEO Tim Cook taking a casual stroll.

Check out this week's show to hear what Dan Lyons, aka Fake Steve Jobs, has to say about that deal and a raft of other topics. Lyons has decamped to Hollywood (well, actually Culver City) to help write Season 2 of HBO's hit Silicon Valley. He also weighs in on how a new TV show scrapping for funding and survival is very much like a Valley startup. (And if you want to know whether FSJ will make a comeback, you need to listen.) And we also hash out why cutting jobs at Microsoft is a necessary evil. (Microsoft announced the layoffs just hours after we spoke.)

SHOW NOTES
Hosts: Barb Darrow and Derrick Harris
Download This Episode
Subscribe in iTunes
The Structure Show RSS Feed

Read More...
posted 6 days ago on gigaom
Nokia's grand Android experiment is over. In addition to the thousands of former Nokia employees being let go from Microsoft this week, Google's operating system is getting the boot as well. Microsoft will surely keep developing its own apps and services for Android phones, but it won't be making the phones themselves. The Nokia X line started as a strategy to bring a Windows Phone user experience to low-cost handsets, complete with Live Tile-like icons and Microsoft services. The idea was to get the Microsoft brand in the hands of customers in emerging markets where feature phones still rule the roost. Microsoft wants none of that and will instead work to get Windows Phone hardware costs down for these markets. LG is keeping Android around, though, so don't worry. This week, the company launched the G3 Beat, a "mini" version of its G3 flagship phone. Like most attempts at a smaller flagship, this is more of a cut-down handset than simply a smaller edition. The G3 Beat has a smaller — but still big — screen measuring 5 inches, and gone is the pixel-dense 2560 x 1440 resolution of its bigger brother: this display is 1280 x 720. LG opted to downgrade the Qualcomm Snapdragon 801 chip inside the G3 flagship all the way to a Snapdragon 400 in the new Beat. The device does keep, however, the laser autofocus camera system. Inside you'll also find a meager 8 GB of internal storage — expandable with a memory card — 1 GB of RAM, and Android 4.4.2. That means the G3 Beat will work with an Android Wear watch.

Should you buy one?

The easy answer to that question: It depends. I spent several weeks wearing an Android Wear watch — the Samsung Gear Live — and although I generally like what it offers, I'm not convinced most normal, mainstream consumers will. At least not when the price of entry begins at $199. Yes, my favorite feature is the built-in Google Now support, complete with voice input. I've longed for that since last August. But the watch doesn't always get my spoken commands right. And we're talking about a solution that is convenient but not necessary when you likely have your phone in hand or nearby. I've been able to ask my Moto X various questions or task it with commands by voice when it's still in my pocket, for example. Here's the thing though: Android Wear is in its infancy. Google has put out a software product that's good enough to get the platform going; it still needs a bit of refinement. Some handy settings — screen brightness is one — are buried in menus a little too deep, for example. And third-party apps are interesting but not completely compelling just yet. Give it time and Android Wear could power a very appealing watch for many more people besides diehard Android enthusiasts.

Read More...
posted 6 days ago on gigaom
Chromebooks in the education market are clearly picking up steam. Earlier this week, Dell said it was temporarily discontinuing direct Chromebook 11 sales to individuals because it can't keep up with demand from commercial channels for the education-focused laptop. On Friday, Google reported one million Chromebook sales to schools in the second quarter of 2014. Along with the stat, Google published a blog post from David Andrade, the CIO for the Bridgeport Public Schools district in Connecticut, explaining why he chose Chromebooks for the 23,000-student district. Among the reasons: "affordability and easy maintenance," something we've suggested on the Chrome Show podcast time and time again.

Read More...
posted 6 days ago on gigaom
We stumbled onto code references for Google's Project Athena last month, and now you can actually see it for yourself. The Chromium OS team is calling it an experiment for now, but it could be the underpinnings of an entirely new Chrome OS user experience. Chromium OS is the open-source platform Google uses and adds to for its Chrome OS computers. On Friday, Google evangelist François Beaufort showed off a screenshot with the new, card-like interface, which reminds me a bit of a digital Rolodex: browser and app tabs are cards that rotate forward or back as if you're spinning them towards or away from you. When we first saw code references to Project Athena, I suggested the new Chrome OS interface could be more touch-friendly, but there's little evidence of that in the single screenshot. About the only way I could see this being for a touchscreen is by swiping up or down on the display to move through the various cards; it's difficult to tell anything more concrete without seeing icons or touch points. That doesn't mean you can't see more, however. Beaufort says that anyone can follow along with the progress of Project Athena by "compiling the convenient 'athena_main' target with ninja -C out/Release athena_main." So if you don't mind a little elbow grease, downloading and compiling, you can get a better look. Sounds like I have a new project for this weekend.

Read More...
posted 6 days ago on gigaom
Want a preview of Material Design, Google's new company-wide look? It's going to make its proper debut on Android L this fall, but you can get a preview now on the web, as Google is rolling out redesigns of many of its most popular web apps, including Drive and Alerts. Google started rolling out its Drive redesign last week, but it appears to be available to more people today, and I haven't seen the new Alerts before. To take the new-look Google Docs, Sheets or Slides for a test-drive, first you need to click "Experience the new Drive" in the upper right-hand corner. If you're not seeing the new design, check out google.com/docs, google.com/sheets and google.com/slides, which are permanent new Google Drive URLs. In addition to the flatter, more colorful design, a few notable aspects of Material Design have made it to Google Drive: there are no more checkboxes. Simply click to select an object, and double-click to open it. The sidebar can be resized, and it's easier to select several files by clicking and dragging a mouse over them while holding the shift key. Certain screens also show the sticky round button that is a major focus of Material Design — in this case, it creates a new document. But in my opinion, one of the handiest changes is that the default document view ("My Drive") now combines docs you've created and ones others have shared with you. Google Alerts has a purple splash across the top with a search bar to easily turn what you're thinking about into an alert. Your saved alerts are below, with two easily identified icons giving you options to either trash that alert or edit it. In general, the interface appears to be a lot more beginner-friendly than previous incarnations of Alerts, which were targeted towards power users. Lower on the page is a new feature, "Alert suggestions," which allows you to create an alert by clicking the "+" icon and is reminiscent of Twitter's recommended follows: a lot of big companies, celebrities, countries and musical artists. They appear to be the same for different users, but I'd love to see suggested alerts based on what I've searched for in the past.

A selection from Google's suggested alerts.

Read More...
posted 6 days ago on gigaom
Gigaom's Derrick Harris said last year that Dell, upon going private, "should let its freak flag fly and take some real risks to distinguish itself from the service-vendor pack." Bitcoin wasn't on Derrick's list of ideas for Dell, but Friday's announcement that the computer giant has partnered with Coinbase to start accepting the cryptocurrency does show how the new, private Dell is starting to wave its freak flag and open up to change. "We're really excited to offer bitcoin as a payment option to our customers who are on the forefront of using this digital currency for their technology purchase needs," said Paul Walsh, CIO of Dell's commerce services, in an e-mailed statement. "Partnering with Coinbase to implement this solution in 14 days is a prime example of the new, more agile Dell." By accepting bitcoin, Dell, with its nearly $57 billion in annual revenue, becomes the largest retailer on board with the cryptocurrency. Technically, it's been possible for a while to use bitcoin to buy Dell gift cards through Gyft, but this saves the workaround and means Dell is officially on team bitcoin. It also means another big get for bitcoin payment processor Coinbase, which installed the system in just two weeks, according to its blog post. Dell has about four times the revenue of Dish Network, which previously held the record as the largest company to accept bitcoin and was also a Coinbase customer. Coinbase has been on a roll since it signed up Expedia in June and was added as a second option for Shopify merchants like Soylent last week. As part of the announcement, Dell is also offering 10 percent off Alienware purchases made with bitcoin (up to a $150 limit).

Read More...
posted 6 days ago on gigaom
By now, anyone who spends much time on social media has gotten pretty used to the deluge of information that occurs whenever there is a breaking-news event like the destruction of Malaysian flight MH17. Photos, videos and news reports about the details all go flying past in our streams, many of them from reliable sources — and yet a staggering proportion of them are wrong, either accidentally or in some cases deliberately. Photos are doctored, quotes manufactured and numbers invented. One of the most crucial journalistic skills is sorting out what's true and what's not in such situations, and while many professional journalists may not like it, thanks to the internet anyone can do this job if they have the inclination, the tools and the time. No one illustrates that better than British blogger Brown Moses, also known as Eliot Higgins, who has gone from being an unemployed office worker to a crucial source of real-time, fact-checked information about the war in Syria. Higgins didn't get to where he is now because he is some kind of superhuman genius; he just applied himself to learning as much as possible about the conflict he was trying to understand, and then used a variety of tools and skills to relentlessly check and re-check the information that was coming in via YouTube, Facebook, Twitter and blogs. So what if you want to join in this process and help verify some of the information that is flying by — what can you do? Here are some tools, services and news communities that can help:

Communities

Storyful's Open Newsroom: Storyful, now owned by News Corp., was one of the first companies to take a rigorous approach to verifying social media in real time. Editors and journalists who work for the company provide their services to mainstream media entities, using a variety of forensic practices, but the company also launched what it calls its Open Newsroom, a Google+ page where participants can help verify information on breaking news events. While membership is not open to anyone (Storyful approves contributors based on their skills and journalistic track record), it is a very useful resource.

Grasswire: A startup founded by Austen Allred, Grasswire is like an open-source version of Storyful's newsroom, in that anyone can participate by verifying news reports about a variety of breaking-news events. Each item that appears on the wire (mostly from Twitter, with other sources to be added in the future) can be upvoted, but also has buttons that say "confirm" and "refute." If a photo is a hoax, for example, users can post a URL and/or a description of why they think it's a hoax, all of which gets added to the item. Allred told me he wants Grasswire to be a real-time newsroom that anyone can contribute to.

Reddit news forums: Reddit is a somewhat controversial suggestion, since the site (or rather, a small group of users on one sub-Reddit) became notorious for identifying the wrong man as one of the culprits in the Boston bombings last year. But that aside, Reddit is one of the sites that does a fairly good job during breaking-news events, especially in sub-Reddits like the Ukrainian conflict forum, which uses some of the new live reporting tools that Reddit recently rolled out, and which Allred said helped inspire Grasswire.

Twitter/Facebook: Twitter itself is obviously one community (if we can call it that) that has become the go-to source for news, both verified and unverified.
The fact that the real-time stream is both a source of facts and a source of hoaxes, misinterpretations and viral BS strikes some people as a negative thing, but Twitter and Facebook are double-edged swords in the same way the internet itself is: they can simultaneously be used to debunk hoaxes and to spread them. What you can do is exercise good judgment and avoid tweeting that too-good-to-be-true report until it has been verified somehow — or work at verifying it yourself.

Tools

Reverse image lookup: Using the reverse image lookup provided by Google or other services such as TinEye, it's quite easy to see if a picture has been posted before and/or manipulated by Photoshop or some other tool. Much of what Twitter accounts like PicPedant do, or services like the Ukraine-based StopFake, involves plugging a picture into such a lookup and tracking down the original. Austen Allred said that this is a big part of the verification that happens on Grasswire as well, and Gawker's Matt Novak has a column in which he regularly debunks news photos.

Looks like the Buk spotted near Snizhne was roughly 15km-20km away from the crash site when filmed http://t.co/INu1eatuKA — Brown Moses (@Brown_Moses) July 17, 2014

Google Earth/Streetview: As Eliot Higgins has pointed out during demonstrations of his techniques, checking photos or videos of weapon attacks or other events against existing imagery in Google Earth and other similar services is not difficult, just time-consuming. By looking at a particular location from multiple viewpoints, using measuring tools to confirm distances and verify structures, and so on, a user can triangulate locations pretty effectively. Andy Carvin and the BBC's UGC desk have compared shadows to the alleged time of day a photo was taken, and in some cases identified buildings or other key landmarks in satellite photos.

Checkdesk: A project partially funded by the Arab Partnerships Fund and Google.org, Checkdesk is an open-source content management system that allows media outlets of all kinds to do live-blogging of news events in a way that makes real-time verification possible (either by staff journalists or by "citizen journalists" and other contributors). Like Grasswire, the verification status of items is shown, and items can be embedded in stories and webpages. Checkdesk says it has outfitted "1,400 citizen journalists, six leading news publishers and grassroots media collectives in five countries." It supports tools like TinEye and Wolfram Alpha.

Video capture: One of the things that often happens after newsworthy videos are posted to YouTube is that they suddenly disappear. Sometimes the terrorist group that posted them changes its mind, and sometimes they are hoaxes that have been discovered. In either case, it's handy to have a copy of the original. Services like KeepVid and this Chrome extension allow you to capture videos and save them to your hard drive. For tips on checking the veracity of videos, it's worth taking a look at the Citizen Evidence Lab, part of Amnesty International, which is designed to help human-rights workers and aid agencies verify videos.

Reuploaded the BUK video youtube.com/watch?v=MiI9s-… filmed here old.wikimapia.org/#lat=48.012555… http://t.co/S7XELMOUY2 — Brown Moses (@Brown_Moses) July 17, 2014

Capturing webpages: As Higgins and others have pointed out, Facebook in particular has a history of removing pages set up by militant groups in countries like Syria and Iraq that contain what the social network feels is violent imagery, etc.
But in some cases this removes crucial information about the location and timing of military attacks and other events. By using a site called Alghayma, you can save a copy of a Facebook page and even in some cases go back in time to see one that has been deleted. There is also Google's cache of pages, and the Internet Archive, which maintained a copy of the Vkontakte page that an alleged Ukrainian separatist deleted after taking credit for the Malaysian airline attack (a small example of automating that kind of snapshot is sketched at the end of this post). These aren't all of the services, tools or communities that are involved in fact-checking or verification, of course. Josh Stearns, formerly of the advocacy group Free Press, maintains an ongoing list of tools at his Verification Junkie blog, and there are a host of different approaches and tips in the free Verification Handbook — which was funded by the European Journalism Centre, with contributions from a number of journalists (full disclosure: I contributed to a chapter on Andy Carvin's methods for using Twitter as a real-time verification tool). And whenever fact-checking is involved, it's worth paying attention to Craig Silverman of Regret The Error, who was also involved in organizing the Verification Handbook. The BBC also has a good collection of resources. So get out and do some journalism!

Post and thumbnail images courtesy of Thinkstock / Digital Vision
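Preserving a page before it disappears can also be scripted. Here is a small sketch that asks the Internet Archive's public "Save Page Now" endpoint to snapshot a URL, using the requests library; the target URL is a placeholder, and heavy automated use should respect the Archive's rate limits.

import requests

def archive_page(url: str) -> str:
    """Ask the Wayback Machine to snapshot a page and return the archived copy's URL."""
    resp = requests.get("https://web.archive.org/save/" + url, timeout=60)
    resp.raise_for_status()
    # The snapshot location is usually reported in the Content-Location header;
    # fall back to the final response URL if it isn't.
    snapshot = resp.headers.get("Content-Location", "")
    return "https://web.archive.org" + snapshot if snapshot else resp.url

print(archive_page("https://example.com/page-that-might-vanish"))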

Read More...
posted 6 days ago on gigaom
This week's bitcoin review takes a look at New York's proposed bitcoin business regulations.

Regulation is a hard but necessary medicine to swallow

Bitcoin has an identity problem. The voices in the community are numerous and diverse: We need to use it for remittances! No, we need it in consumers' hands! No, focus on merchants! Let's use it for day-trading! All of these voices are part of the bitcoin community, and together they make up what we know as bitcoin today. These diverse voices, however, can also drown out the calls to preserve the anonymity that was once one of the virtual currency's primary appeals — not least because it provided a payment platform for nefarious ends. Those voices are quieter now but haven't gone away. It's also why currencies like Darkcoin and the proposed DarkWallet are attracting attention — there's still appeal for an anonymous payment system that only requires the blockchain to move bitcoin around the planet. That brings us to the New York Department of Financial Services' proposed bitcoin regulation: BitLicenses. Simply put, the regulations are strict. Benjamin Lawsky, superintendent of the NYDFS, had been talking with the bitcoin community through public hearings and a Reddit AMA. This week, he unveiled the NYDFS's proposed bitcoin regulations, which will be posted on July 23 for a 45-day public comment window. Most of the regulations set high security standards, like FBI background checks for employees, and many of these the bitcoin community seems OK with. However, some of the BitLicense requirements go against a lot of the ideas early bitcoin users had about what the currency could be. For example, to avoid money laundering, businesses must have the name and physical address of BOTH parties in the transaction:

As part of its anti-money laundering compliance program, each firm shall maintain the following information for all transactions involving the payment, receipt, exchange or conversion, purchase, sale, transfer, or transmission of Virtual Currency: (1) the identity and physical addresses of the parties involved; (2) the amount or value of the transaction, including in what denomination purchased, sold, or transferred, and the method of payment; (3) the date the transaction was initiated and completed, and (4) a description of the transaction.

New York Department of Financial Services

That's a far cry from the entirely anonymous early days of sending bitcoin from wallet to wallet without knowing who is on the other end. Bitcoin goes from being the internet's version of cash to a highly traceable form of money whose every movement (through a BitLicensed company, at least) is tracked. There are many other regulations in the proposal, and I recommend reading TwoBitIdiot's dissection of the good, the bad and the ugly of it. My own two cents is that I believe bitcoin will benefit from New York's BitLicensing in the long term. The cryptocurrency has been fighting so hard for validity and to shake its risky reputation that having a company pass the high bar set by these standards would carry a certain level of trust. Companies outside of the bitcoin space would hopefully be more comfortable working with companies that have a BitLicense in New York. This may be a dividing moment for the bitcoin movement, as people choose between those pushing it to go mainstream and those who burrow back into the deep web to find another virtual currency.
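To make the record-keeping requirement quoted above concrete, here is a hypothetical sketch of the kind of record a BitLicensed business would need to retain for each transaction. The field names and example values are invented for illustration; they are not taken from the regulation text.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class VirtualCurrencyTransactionRecord:
    # Identity and physical addresses of both parties
    sender_identity: str
    sender_address: str
    receiver_identity: str
    receiver_address: str
    # Amount or value, denomination and payment method
    amount: str
    payment_method: str
    # When the transaction was initiated and completed
    initiated_at: datetime
    completed_at: datetime
    # A description of the transaction
    description: str

record = VirtualCurrencyTransactionRecord(
    sender_identity="Alice Example",
    sender_address="123 Main St, Brooklyn, NY",
    receiver_identity="Acme Hosting LLC",
    receiver_address="456 Market St, San Francisco, CA",
    amount="1.25 BTC",
    payment_method="wallet-to-wallet transfer",
    initiated_at=datetime(2014, 7, 18, 14, 3),
    completed_at=datetime(2014, 7, 18, 14, 9),
    description="One year of web hosting",
)

It is exactly this level of detail, applied to both ends of every transfer, that marks the break from bitcoin's anonymous early days.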
Bitcoin now has a new face, and the community will have to embrace its new identity.

The market this week

Last week, I said the price might dip below $600 if it continued on its downhill slide. Well, the market proved me wrong. The bitcoin price spiked between Friday and Saturday, jumping from $614 to $634. It's since gone down, but only to a closing price of $622 last night. It seems less likely to fall below $600 next week unless some skittish players pull out of the market because of the BitLicense announcement. For background on why we're using CoinDesk's Bitcoin Price Index, see the note at the bottom of the post.

In other news we covered this week:
Shopify pits two of bitcoin's largest payment processors head-to-head and lets its merchants choose: accept bitcoin via Coinbase or BitPay?
In the European bitcoin scene, Elliptic took in $2M to help firms handle bitcoin.

Here are some of the best reads from around the web this week:
What if the value of your digital coins were based on personal reputation? Wired takes a look at a new, but somewhat unconventional, altcoin called Document Coin.
Facebook, meet bitcoin (kinda). BitPay released Get Bits, a Facebook app where people can sign up and say that they have bitcoin to connect with others. Of course, you can't trade bitcoin on Facebook yet, but it does link you with the Facebook friends you trust to buy bitcoin.
There's a bet between Ben Horowitz and Felix Salmon over bitcoin (and the winner gets a pair of alpaca socks). Fortune thinks Horowitz will win, but only in spirit.
Someone hacked CNET's user database over the weekend and put it up for sale for bitcoin — but in the end, it was all just a publicity stunt.
Bitcoin heads to South Africa after PayFast, a payment processor, started accepting bitcoin. The WSJ said it's one of the first payment processors anywhere to incorporate the cryptocurrency.

Bitcoin in 2014
The history of bitcoin's price

A note on our data: We use CoinDesk's Bitcoin Price Index to obtain both a historical and current reflection of the bitcoin market. The BPI is an average of the four bitcoin exchanges that meet its criteria: Bitstamp, BTC-e, LakeBTC and Bitfinex. To see the criteria for inclusion or for price updates by the minute, visit CoinDesk. Since the market never closes, the "closing price" as noted in the graphics is based on end of day Greenwich Mean Time (GMT) or British Summer Time (BST).

Photo from StevanoVicigor/Pond5

Read More...