posted 9 days ago on gigaom
As regulators attempt to sift through the possible public harms and benefits of Comcast's $45.2 billion plan to buy Time Warner Cable, we thought it was worth showing that if the deal takes place, it could lead to a significant jump in the number of broadband subscribers getting a data cap. If we add Time Warner Cable's 11.6 million broadband subscribers from the end of 2013 into the mix of customers with caps, the total percentage of U.S. homes that have some type of cap or other limit on downloads rises to 78 percent, up from 64 percent today. That's a significant jump, especially since the number of homes with caps plateaued after 2011, when AT&T hopped on board the bandwagon that Comcast started driving in 2008. A side note for data nerds: the percentage of capped consumers could be a bit higher, because the Leichtman Research Group data we use to calculate subscribers accounts for only 93 percent of the total number of broadband subscribers.

That's not to say we will definitely reach 78 percent, given that Comcast has pledged to divest itself of 3 million pay TV subscribers in order to help get the deal through regulatory screens. However, it's unclear which markets might be divested and whether those markets would go to a buyer that also has a cap. Of the major cable providers in the U.S., only TWC and Cablevision don't have caps. And even if you take those 3 million broadband subscribers out entirely, we're still looking at 74 percent of U.S. broadband subscribers hitting a cap.

As a Time Warner Cable customer who currently doesn't have a broadband cap, I can't say that I view this deal as a good thing. I imagine the 10 to 13 percent of U.S. homes that would join the capped majority feel the same. There's still time for the FCC to take a harder look at caps — or as Comcast calls them, data thresholds. For those who want to see who's capping their broadband, check out our chart from November 2013.
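The rough math behind those figures, as a back-of-the-envelope sketch in Python: the 64-to-78 percent jump from adding TWC's 11.6 million subscribers implies a total base of roughly 83 million broadband subscribers, which is an inference from the article's numbers rather than a reported figure.

```python
# Back-of-the-envelope check of the capped-subscriber math above.
# Assumption: the 14-point jump from adding TWC's 11.6M subscribers
# implies a total base of roughly 83M broadband subscribers.
twc_subs = 11.6e6
total = twc_subs / (0.78 - 0.64)            # ~82.9M total broadband subscribers
capped_today = 0.64 * total                 # ~53M currently under a cap
capped_after = capped_today + twc_subs      # ~64.6M if the deal closes

print(f"capped share after deal: {capped_after / total:.0%}")         # ~78%
print(f"minus 3M divested:       {(capped_after - 3e6) / total:.0%}")  # ~74%
```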

Read More...
posted 9 days ago on gigaom
Aereo has plans to expand to 50 cities within the next 18 months if it wins its Supreme Court case, reports the Houston Chronicle, which recently got a tour of the Aereo facility there. The company is still keeping mum on current subscriber numbers, but CEO Chet Kanojia told the Chronicle that it's already profitable in Houston, where it has hardware to serve up to 40,000 subscribers. Aereo has to defend itself in front of the Supreme Court in two weeks. Story posted at: houstonchronicle.com

Read More...
posted 9 days ago on gigaom
In two months a Vancouver startup called Mojio will start selling its connected car module, a plug-in device that connects your car to the cloud via T-Mobile's network and to your phone via Bluetooth. While there are several gadgets on the market that promise to turn your unconnected car into a connected one, Mojio has an interesting take on the market: it wants to turn its plug-in car module into an application development platform.

We first reported on Mojio back in 2012 when it kicked off an Indiegogo campaign for its module, which plugs into the onboard diagnostics (OBD) port found in all cars made in the last 18 years. Like competing devices, Mojio's module can send acceleration, braking and engine alert information to your smartphone, but Mojio has layered on a bunch of other apps that integrate that driver data with social networking, contacts, calendar and SMS features on your phone. Mojio launched the device in beta with its Indiegogo contributors last year, and in October it raised a $2.3 million seed round led by Relay Ventures. Now it's getting ready to release its commercial module to the public with several upgrades it hopes will set it apart from competitors like Automatic and Zubie, CEO and co-founder Jay Giraud told me in a recent interview.

Most significantly, Mojio is opening up APIs to developers, letting them design apps for the gadget the same way you'd design apps for iOS or Android. Those apps can tap into all of the vehicle diagnostic and location data Mojio draws from the car's controller area network (CAN), as well as the social networking and communications tools built into Mojio's cloud-based platform. Those apps can be added to a user's module from what amounts to a connected car app store, Giraud said. Giraud said Mojio is working with multiple developers for its upcoming launch. One developer he did name is Glympse, which is looking to integrate cars into its location-sharing app. Right now Glympse lets you share your location temporarily from your smartphone, but inside of Mojio it becomes a beacon that would allow you to keep constant tabs on the location of your car.

Second, Mojio is partnering with T-Mobile US to connect its module to T-Mobile's HSPA+ network and sell the module through T-Mobile's retail channels. Mojio hasn't finalized the exact pricing details, Giraud said, but it's looking at two separate payment models: one in which you buy the device for $149 with no subscription fee whatsoever (including no data connectivity charges), or a monthly subscription fee of around $6, which includes access to both its cloud-based services and network access. Customers who sign up for the monthly plan would pay nothing for the hardware, Giraud said.

Mojio finds itself going up against a growing number of in-car module makers and app makers, each with a slightly different approach to connected vehicles. Zubie also offers mobile network connectivity, charging $100 a year for a subscription. Automatic relies solely on Bluetooth to communicate with your phone, while Dash recently launched its own software-only service that uses any off-the-shelf diagnostic interface gadget to connect your smartphone to the car. By launching with a stable of third-party apps, Mojio hopes to differentiate itself from that pack. That strategy means attracting developers, who themselves are attracted to devices that ship in large volumes. While that developer community could take a while to build, Mojio's module will have plenty of functionality to make it useful in the interim.
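Mojio hadn't published developer documentation at the time of writing, so here is a purely hypothetical sketch of what a third-party app hitting such a platform might look like; the host, endpoint and field names below are invented for illustration and are not Mojio's actual API.

```python
# Hypothetical sketch only: Mojio's developer API was not public at the time
# of writing, so the host, endpoint and field names here are invented for
# illustration and are not taken from Mojio documentation.
import requests

API_BASE = "https://api.example-connected-car.test/v1"   # placeholder host

def latest_vehicle_snapshot(vehicle_id: str, token: str) -> dict:
    """Fetch a recent diagnostics/location snapshot for one vehicle."""
    resp = requests.get(
        f"{API_BASE}/vehicles/{vehicle_id}/snapshot",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # Speed, fuel level and engine alerts mirror the kind of CAN-bus data the
    # article says the module exposes; the exact field names are assumptions.
    return {k: data.get(k) for k in ("speed_kph", "fuel_level", "engine_alerts", "location")}
```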
Mojio's future roadmap also includes new hardware advances, Giraud said. Mojio is looking into building a module that includes both LTE and Wi-Fi connectivity, which would connect not only cars to the internet but also the tablets, smartphones and other Wi-Fi gadgets passengers bring with them.

Read More...
posted 9 days ago on gigaom
In the business world, the voice is a powerful thing. In meeting rooms, offices and conference calls, it's how ideas are generated, mandates given and gauntlets thrown down. Yet, somehow, the record of all these discussions doesn't quite do them justice: messy handwritten (and probably incomplete) notes, typed meeting minutes that don't distinguish idle chatter from meaningful business or, worse, no record at all.

Thanks to advances taking place in computing and machine learning, that's all about to change. Take, for example, a startup called Gridspace that wants to make meetings more productive by outsourcing note-taking to a machine. It's a challenging problem to solve — any solution must provide a seamless experience, as well as be accurate — but the company is trying to do it right. It has built a product that bundles smart hardware and applications with several flavors of speech recognition, voice recognition and natural language processing.

The most noticeable piece of the puzzle is the hardware — a simple, small recording device called the Memo M1 that sits on a desk or table. It's always on, although its ambient light and motion sensors let it kick in only when someone is actually in the room. It has radio sensors to help determine who's in the room based on their mobile phone fingerprints, although voice recognition helps make this more accurate, as does pre-planning the meeting using the Memo app and listing the participants. The Memo service works with conference lines as well (it can be set up to automatically call participants), and there's a mobile app available for recording conversations on the road.

After a meeting is done, Memo will email everyone the highlights of the meeting and give them an opportunity to go through and comment on or flag certain parts. The next day they'll receive a fuller digest, complete with that post-facto information. At any time, participants can listen to the highlights of the meeting, which presumably are important points or action items, or they can hear the whole thing. They can search for specific parts by word or person.

The Memo mobile app. Source: Gridspace

Gridspace CTO Anthony Scodary described the user experience design as being focused on minimizing changes to how we go about our days in the office. Set up to its fullest potential, Memo users don't have to press a button, set something up in an app, or even speak a command to take advantage of the service. "It's really just [about] designing interfaces … that make something that you don't have to change your natural behaviors much," he said.

Getting it right means getting NLP right

As seamless as the experience might be, though, it's Gridspace's work on natural language processing and speech recognition that could make or break the company. All the automation and search capabilities in the world don't mean much if a system designed to capture meetings can't understand what's happening or what's being said. And after all, as Scodary acknowledged, "The end goal [of Memo] is to generate what is essentially the highlight reel of a meeting." Memo has several methods for deciding what might be important, ranging from certain keywords being spoken (e.g., "This is important.") to someone manually pressing a button on the M1 device to flag it as important. Even changes in volume or lots of people talking over each other might indicate a key part of the conversation.
However, as with many machine learning systems today, it's the input of humans that will help train Memo to be as accurate as it can be, Scodary explained. The more that people go through afterward and verify the system was correct, or flag important parts it missed, the smarter it gets. When someone "inputs unambiguously that something is important," he said, Memo analyzes the context around those sections and readjusts the weights in its algorithms accordingly.

Pressing to flag content or mute the recorder. Source: Gridspace

Out of the boardroom and into the hallway

If Gridspace, which is still in the process of running closed pilot projects and taking reservations for its M1 devices and mobile app, can pull this off, it could have promise even beyond the conference room. Scodary envisions a future where people have Memo devices sitting on their desks, ready to capture an impromptu brainstorming session or maybe just a short chat about the all-hands meeting earlier in the day. "We're very interested in those three-minute meetings between your other meetings," Scodary said. (And don't worry: there's a mute button if you're going to complain about the boss, and Scodary said the company is working on features for voice commands to strike previous comments and to delete parts of a meeting that has already happened.)

Frankly, this vision is the kind of thing one can see a company like Microsoft or Google chasing, too, as they strive to own productivity by owning the crossroads of collaboration, communication and devices. This type of technology could find its way into an already sensor-packed smartphone, tablet, desktop or even wearable — Intel recently showed off a new mobile processor designed with voice recognition in mind — and integrate with existing office suites and meeting applications. Their teams of artificial intelligence researchers – who have already made speech recognition commonplace on smartphones and gaming systems, and who are advancing the state of the art in language understanding – could help make such a system faster, more accurate and even predictive.

At home or in the office, our voices could soon be inputs to our computers just as important as our keystrokes. Once we figure out how to avoid putting our collective foot in our mouth, we'll probably be thankful for it.
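As a generic illustration of the feedback loop Scodary describes (not Gridspace's actual code), here is a minimal sketch of nudging feature weights when a participant confirms or rejects a flagged highlight:

```python
# Generic illustration of learning from human feedback, in the spirit of the
# re-weighting Scodary describes. This is not Gridspace's implementation.
def update_weights(weights, features, was_important, lr=0.05):
    """Perceptron-style update for a highlight detector.

    weights       -- dict of feature name -> weight
    features      -- dict of feature name -> value for one meeting segment,
                     e.g. {"keyword_hit": 1.0, "volume_spike": 0.3}
    was_important -- True if a participant flagged/confirmed the segment
    """
    score = sum(weights.get(f, 0.0) * v for f, v in features.items())
    predicted = score > 0.5
    if predicted != was_important:
        direction = 1.0 if was_important else -1.0
        for f, v in features.items():
            weights[f] = weights.get(f, 0.0) + lr * direction * v
    return weights
```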

Read More...
posted 9 days ago on gigaom
The National Security Agency has known about the Heartbleed bug, which has compromised two-thirds of the world's websites, for over two years, and has been actively trying to exploit it, according to reports. The revelation, which is likely to outrage a security industry already furious at the NSA, comes by way of Bloomberg, which cites two unidentified sources and reports: "Putting the Heartbleed bug in its arsenal, the NSA was able to obtain passwords and other basic data that are the building blocks of the sophisticated hacking operations at the core of its mission, but at a cost…The agency found the Heartbeat glitch shortly after its introduction, according to one of the people familiar with the matter, and it became a basic part of the agency's toolkit for stealing account passwords and other common tasks."

The news comes as companies and governments are still reeling from last week's disclosure of Heartbleed, which lets attackers extract data from OpenSSL, the open-source software used to encrypt passwords and other sensitive data. The vulnerability has exposed companies like Yahoo and Google, as well as hardware providers like Cisco, and led the Canadian government to temporarily shut down its tax preparation service. For now, however, it's not clear how much actual damage has been done — or if only a handful of people, including those at the NSA, knew about the vulnerability. Some reassurance came today when security service CloudFlare said it is unlikely that hackers have been able to use Heartbleed to obtain the private SSL keys used by websites. Companies have been actively patching their sites since last week's disclosure.

While Heartbleed represents a useful weapon for the NSA to spy on its opponents, the agency's failure to disclose it will anger those who believe that the U.S. government should focus on defensive measures like encryption and security — rather than using compromised standards as a means of attack. The NSA is still under criticism following disclosures by former contractor Edward Snowden that it deliberately introduced weaknesses into other global encryption standards.

Read More...
posted 9 days ago on gigaom
Chromecast owners just got a few more ways to beam audio to the big screen: Player FM, a podcast app and cloud service that we previously covered on Gigaom, added Chromecast support to its Android app Friday. Also now Chromecast-capable is Rocket Music, an Android music player that includes features like an equalizer and lyrics viewing. Don't want to listen to your podcasts or music on your TV? Then you can always turn Chromecast into a networked audio player.

Read More...
posted 9 days ago on gigaom
The University of Southern California's robots usually lurk in dim basement labs and rooms tucked at the end of winding corridors. They didn't actually emerge on Thursday, but the public was invited inside for an intimate look at what the school's engineering and artificial intelligence experts have been building.

First, meet the ARM robot, which uses a camera and depth sensors to see an object and pressure pads on its hands to register when it has grasped it. This cute little guy is the NAO robot, which pops up at a lot of robot expos doing very different things. I've seen it play soccer and act as a social companion. But at USC it was actually leading an exercise session, inviting its observers to do lunges, squats and jumping jacks. It acted out each movement. USC's other interactive robots included Romibo, dressed up like a dragon… …and the school's Bandit-II robot, which invited participants to copy its movements. If they messed up, it made them start all over again. Bandit-II's lips and eyebrows move, giving it a wide range of emotions.

One of the more unusual robots was the EcoMapper: an autonomous underwater robot that can collect data on water quality and map the floors of bodies of water. The Zeus 3D printer also made an appearance. When I wrote about the Zeus last year, most of the images were renderings. But now the machine is very real. I saw it print and scan, and got a glimpse of the beautiful user interface its team promised. National Robotics Week events are going on through Sunday all over the country. Check out events near you here.

Read More...
posted 9 days ago on gigaom
With the Samsung Galaxy S5 now available, PayPal is making good on its promise to use the handset's fingerprint reader. The company released mobile apps specifically for the Galaxy S5 and Samsung's latest wearables on Friday. Using the phone app, you can log in to your PayPal account with a fingerprint scan instead of a typed password and make payments online or at participating retail locations that accept PayPal payments. PayPal actually announced the software in conjunction with the Galaxy S5 introduction at February's Mobile World Congress. Until now, however, no devices were available to use the app. Here's a short demonstration of how the PayPal app works:

The idea of using a fingerprint for account authentication instead of a typed password is rather timely, given that the massive Heartbleed security flaw may have exposed passwords on two-thirds of the world's servers. Clearly, neither PayPal nor Samsung knew this would happen when they announced the mobile payment feature in February, but the situation could bring awareness to Samsung's newest phone since it uses biometrics instead of a password. Even if that fingerprint data is stored on PayPal's servers, those servers aren't affected by Heartbleed, according to LastPass.

Samsung's newest handset isn't the only device that can use a new PayPal app, however. PayPal is also available on the Samsung Gear 2 smartwatch and Gear Fit wearable, so you can make payments, redeem offers and receive payment notifications on your wrist.

Read More...
posted 9 days ago on gigaom
We're getting closer to a Chrome OS tablet thanks to Lenovo, which is showing off its touchscreen convertible ThinkPad Yoga 11e. The $349 Chromebook arrives in June, and although it's geared for the education market, Lenovo is taking a cue from Dell and planning to sell the Yoga 11e to consumers as well. Lenovo announced the device back in January and is now getting ready for the product launch. Brad Linder of Liliputing got a chance to use an early prototype — that's why the touchscreen doesn't work 100 percent for him — and shared this video demonstration of what to expect from the Chromebook.

Clearly, the Yoga 11e isn't the first touchscreen Chromebook to hit the market. Google's Chromebook Pixel claimed that prize when it launched a year ago, and Acer followed with a lower-cost touchscreen model of its C720 Chromebook. Lenovo can claim to have the first convertible touchscreen Chromebook, however, because like other Lenovo Yoga products, you can fold the screen all the way to the back of the laptop. That makes the on-screen keyboard in Chrome OS a bit more valuable, because the Yoga 11e can essentially be used like a Chrome OS tablet as needed. Or you could flip the screen back up and use the traditional ThinkPad keyboard. I suggested this very use case earlier this year, noting that this type of form factor would be more likely than an actual Chrome OS tablet because Chrome OS isn't yet touch-optimized.

Read More...
posted 9 days ago on gigaom
Remember Staples? It's the latest company considering offering 3D printing to customers, after announcing today that it has partnered with 3D Systems (ticker: DDD) to test printing stations in two stores. This isn't Staples' first venture into 3D printing: last September, some of its European stores began offering to print items for customers on Mcor printers, which build 3D objects with layers of paper. But the new U.S. centers are meant to be more experiential. Someone who is totally new to 3D printing can walk in and, with the help of staff members, design an object on the spot or print a premade design. Staples is also offering photo booths where customers can take a 3D picture of themselves and then print it.

A 3D photo booth in a Staples store. Photo courtesy of Staples.

The service will compete with UPS, which rolled out its own test locations last year. While Staples' 3D Systems printers are desktop, consumer-oriented machines, UPS is offering professional printers from 3D Systems rival Stratasys. As a result, Staples might be more appealing to an individual looking to print something for personal purposes, while UPS is more business- and artist-oriented.

The "Kinkos model" is a potential direction in which 3D printing could move. People might not want to invest in their own desktop 3D printer; instead, they could travel to a central location when they need to print an object. It ruins some of the on-demand benefits of 3D printing ("My spatula broke. I'll print a new one!"), but still allows people to make highly customized objects without paying hundreds or thousands of dollars for a personal printer. Staples' plan to have customers come into the store and initiate a print job themselves, instead of emailing it in ahead of time, is unusual and might be off-putting, as it usually takes half an hour to several hours to print a single object. But if it works, there would be a lot of people wandering around Staples stores with time to burn.

A Staples 3D printing center. Photo courtesy of Staples.

Read More...
posted 9 days ago on gigaom
Like most companies, Twitter is happy to put out numbers that make the service look as popular as possible, like the 240 million or so figure it uses for "active" users, defined as anyone who logs in at least once a month. But it rarely talks about what many see as the most important number, namely the number of people who actually tweet — which is probably why estimates like the most recent one from Twopcharts, as quoted in the Wall Street Journal, have gotten a lot of attention: it says 44 percent of accounts have never posted a single tweet.

As many people have pointed out — including Twopcharts itself — this kind of data is problematic at best, in part because it is based on fuzzy estimates rather than data that comes directly from Twitter. It's also difficult to figure out how many of the almost 1 billion accounts that Twopcharts says have been created since Twitter began were created by users who signed up again under another name. That said, the idea that Twitter has a billion or so accounts, but only about 200 million of those users even sign in once a month (let alone post a tweet), and almost half have never posted a single status update, seems somewhat troubling. But should it be? And if it is, what should Twitter do about it?

.@rsarver @jasondfox also 99.99% (how many nines?) of TV viewers have never made a TV show.— Josh Elman (@joshelman) April 11, 2014

Is Twitter still too hard to use?

We don't expect everyone who reads blogs to have one, nor do we expect everyone who reads a book to have written one — but Twitter has always seemed different, in part because it is so easy to post a tweet. And yet, for anyone who follows the science of social networks, it's not surprising that Twitter would fit the 90-9-1 ratio, in which the vast majority of users simply consume. There's at least some evidence that Twitter is concerned about this number, because senior executives of the company have talked a number of times about moving the "scaffolding" of Twitter into the background somehow — by which they mean the machinery that can often be confusing for new users, like the @ symbol or the hashtag or the retweet, or the fact that you sometimes have to use a period in front of your tweet so that everyone will see it.

The number Twitter seems most concerned about, however, is the overall user number — the one that caused some mild panic among shareholders and investors when Twitter admitted in its first-ever earnings conference call that it was flattening. Getting that figure — and the active-user figure — to grow is the reason why Twitter has been adding features and redesigns like a mad thing recently. Everything from experiments like @MagicRecs (which doesn't seem to have performed very well) to the addition of new Facebook-style profile designs seems intended to make the service more appealing for new users. But there is still much work to be done, if the comments on a recent WSJ piece are any indication.

Twitter needs to broaden its reach

Twitter is also clearly concerned — as it should be — about the difficulty of finding new people or accounts to follow, and of sorting through the massive amount of content that comes from half a billion tweets a day. That seems to be the rationale behind the company's acquisition of Cover, a small startup that was working on an adaptive home screen for Android devices, one which changed what it showed users based on their environment, time of day, etc.
Just as Google is trying to do with Google Now, Twitter needs to get better at surfacing content automatically, without waiting for users to click and say that they are interested in a specific tag or keyword. The service's "Discover" tab is relentlessly pathetic at this, despite the time and resources that Twitter has devoted to it — which could explain why two of the main designers responsible for that feature recently left the company.

In the end, the number of people who actually post a tweet is always going to be a relatively small fraction of the overall user figure. That's not to say Twitter shouldn't be concerned about non-tweeters, but it has much larger fish to fry. It needs to figure out why some people don't use Twitter at all — why they sign up and then never return. What can it do to convince them to stay? It's not clear that imitating Facebook is going to work, but it has to do something.

Post and photo thumbnails courtesy of Shutterstock / Tim Stirling

Read More...
posted 9 days ago on gigaom
In this week's bitcoin review, we recap how Chinese regulation rumors are causing the price to fall.

Is anyone to blame for the price downslide?

Rumors continue to swirl that China is starting to crack down on bitcoin trading by freezing some bitcoin exchanges' bank accounts. In an announcement posted on its site yesterday, BTC Trade said that it had received notice from its bank that its account would be frozen on April 15 if it does not stop using it to conduct bitcoin-related business. Chinese exchanges Huobi and BTC100 also posted notices that they had received similar calls about their bank accounts. Around the time of the announcements, the price of bitcoin took a huge tumble, falling nearly 18 percent in one day. The market then rebounded when the governor of the People's Bank of China said during an economic conference that it was out of the question for the bank to ban bitcoin, because it didn't create it. Instead, according to the reports, he views it as more of an asset or a collectible, like stamps. While that did help ease some uncertainty in the market, it wouldn't be out of the question for the price to see a couple more free falls if more Chinese exchanges are faced with the threat of frozen bank accounts.

The market this week

In a scary moment for bitcoin holders, the price dipped below $400 on Thursday and fell 18 percent to close at $360.84. It has since made a major rebound and is up 17 percent to $425 as of 10:45 a.m. PST. For background on why we're using CoinDesk's Bitcoin Price Index, see the note at the bottom of the post.

In other news we covered this week: The MtGox drama continues, as CEO Mark Karpeles is likely to face arrest stemming from the company's legal problems should he set foot on U.S. soil. Circle's CEO thinks the future of bitcoin will be determined by central banks, standards bodies and corporate contributors — not quite the decentralized system of bitcoin's early dreamers. And bitcoin continues its consumer-friendly approach after Cryptex announced a bitcoin-to-cash debit and ATM card.

Here are some of the best reads from around the web this week: Ezra Klein's new Vox Media got into the bitcoin game right away, publishing its first piece on why bitcoin is a bad currency that will change the world, along with 19 "cards" that explain in layman's terms what the currency is. A New York Times reporter wrote about his bitcoin befuddlement and his process in trying to understand it: "The first thing I found out? This is the closest thing in finance to riding an angry bull at the rodeo." The largest bitcoin "mine" in North America looks more like a greenhouse than a traditional mine, and it's on the outskirts of a small town in central Washington. Bitcoin also goes to Washington — D.C., this time: Robocoin brought an ATM to Capitol Hill, then taught congressmen how to buy cryptocurrency. And bitcoin is headed to the classroom: NYU announced it will offer a class this fall on the legal and financial issues around the cryptocurrency world — that is, if it still exists in the fall.

Bitcoin in 2014

The history of bitcoin's price

A note on our data: We use CoinDesk's Bitcoin Price Index to obtain both a historical and current reflection of the bitcoin market. The BPI is an average of the three bitcoin exchanges that meet its criteria: Bitstamp, BTC-e and Bitfinex. To see the criteria for inclusion or for price updates by the minute, visit CoinDesk.
Since the market never closes, the "closing price" noted in the graphics is based on the end of day in Greenwich Mean Time (GMT) or British Summer Time (BST).

Featured image from Flickr/BTC keychain
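As a quick sanity check of the percentages quoted in "The market this week": the pre-drop price below is derived from the post's figures rather than reported.

```python
# Quick check of the price moves quoted above. The Thursday close and the
# $425 rebound figure come from the post; the implied prior close is derived.
thursday_close = 360.84                  # after the 18 percent one-day fall
rebound_price = 425.00                   # as of Friday morning

implied_prior_close = thursday_close / (1 - 0.18)            # ~$440 before the drop
rebound_pct = (rebound_price - thursday_close) / thursday_close

print(f"implied close before the fall: ${implied_prior_close:.2f}")
print(f"rebound so far: {rebound_pct:.1%}")                  # ~17.8%, the 'up 17 percent'
```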

Read More...
posted 9 days ago on gigaom
I've been toying with Amazon's Fire TV ever since the company released the set-top box last week, and I've been impressed by the speed and snappiness of the device, and drawn in by games that I didn't think would matter much to me. But there are still a few things missing to make this a great device. Amazon is working on addressing some of these issues, but it chose not to pursue others – which I think is a mistake. Here are five things that could be improved about the Fire TV:

A new Netflix app

Netflix launched a whole new user interface last November, featuring bigger preview pictures, new cues to help you decide what to watch next, a better search experience and more. Netflix launched this new user interface almost half a year ago – but on the new Fire TV, users still get the old UI. The company actually spent a long time refining this experience with eye-tracking, A/B tests and more. The new user interface is now available on newer Roku devices, the Xbox 360, the PS3 and PS4 as well as various smart TVs — but not on the Fire TV, which is a shame. I've asked Amazon when it intends to switch over to the new UI, but have yet to hear back.

Local file playback

Online video services are great — but every now and then, you end up with a video on your hard drive that you just want to quickly play on your TV, without jumping through tons of hoops. One of the easiest ways to do this on many devices is to simply copy the file to a flash thumb drive, plug it in and watch away. Not so on the Fire TV. The device does have a USB port, but local file playback currently isn't supported, and I've been told by Amazon folks that it's instead being used for accessories as well as developer support.

The Fire TV has a USB port – but you will not be able to use it for local file playback. (Image: Amazon)

Customers are instead advised to upload local media to the Amazon Cloud Drive. Of course, there is also Plex, which is great if you have a lot of media to share over your home network. But still, a simple file player app with access to the USB port, or possibly even networked hard drives, would definitely improve the experience, especially for less technical users.

Third-party app installs

Amazon made a big deal out of calling Fire TV open when it launched the device earlier this month in New York. That may be true for developers, but for consumers? Not so much. That's because Amazon decided to get rid of a key feature when it forked Google's Android operating system to tweak it for the big screen: Android allows users to install third-party apps, while the Fire TV does not. Got an Android app that's not from Google Play? Just change a security setting and you're free to do whatever you wish with it. The same is possible on the Kindle Fire, but not on the Fire TV. "We want to make sure that any games or services on Fire TV offer a great customer experience for a TV," an Amazon spokesperson told me via email, which is why the company doesn't enable third-party app installs. Developers have a way to install their own apps on the Fire TV — but there is no similar option for end users. It's true that apps that are optimized for mobile devices often don't look great on the TV screen, but taking away the ability to install any third-party apps also cuts down on lots of potential. No adult entertainment apps, no apps that your buddy just built for a few of his friends and no way to easily preview an app that hasn't officially been released yet.
Granted, developers do have ways to bypass this restriction (a rough sketch of that route follows at the end of this post), and it's probably only a matter of time until someone finds an easier way to install apps on a Fire TV — but it would be great if Amazon backed up all of this talk about openness with actions.

Additional Amazon services

One of the most puzzling details of the Fire TV launch was that the device went on sale without key Amazon services. The company previewed an impressive integration of its FreeTime kids entertainment subscription offering — only to announce that it wouldn't be available until May. Also delayed by a month is the ability to access Amazon's cloud music locker from the device.

Fire TV will offer the Amazon FreeTime kids offering with parental controls — when it launches next month. (Image: Amazon)

Granted, a few weeks of waiting isn't all that much. But the delay makes you wonder whether Amazon couldn't get all of its ducks in a row for the Fire TV launch, or whether it is preparing to launch a bigger content offering in May — perhaps a Spotify-like music subscription service?

Better second-screen support

Fire TV launched with some second-screen features for Kindle Fire owners, who are able to send Amazon Video content from their tablet to the big screen, read IMDb trivia while they watch movies and even mirror the entire screen of an Amazon tablet. It would be great if this kind of functionality were also available for other mobile devices, but Amazon is still playing catch-up with Chromecast and even Roku in this respect. Fire TV does support DIAL, the multi-screen protocol used by apps such as Netflix and YouTube, but the Fire TV YouTube app is the only one that currently makes use of it, and even that doesn't always work reliably.

Fire TV has some second-screen integration with the Kindle Fire — but it would be great if it also offered iPhone and Android phone users more features. (Image: Amazon)

Here's the good news: I've been told that more DIAL apps for Fire TV are on the way, and screen mirroring could soon work with other Android devices as well. "We are working on adding Miracast support," an Amazon spokesperson told me. Once that's done, you should be able to mirror the screen of your Nexus 7 tablet, or newer Android phone, on the Fire TV as well.
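For reference, the developer route mentioned above generally relies on Android's standard adb tool. A minimal sketch, assuming adb is on your PATH and ADB debugging has been enabled in the Fire TV's developer settings; the IP address and APK name are placeholders:

```python
# Minimal sketch of sideloading an app onto a Fire TV with the standard
# Android adb tool, driven from Python. Assumes adb is installed and ADB
# debugging is enabled on the device; IP address and APK path are placeholders.
import subprocess

FIRE_TV_IP = "192.168.1.50"      # replace with your Fire TV's address
APK_PATH = "my-tv-app.apk"       # replace with the app you built

subprocess.run(["adb", "connect", f"{FIRE_TV_IP}:5555"], check=True)
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)   # -r reinstalls/updates
```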

Read More...
posted 9 days ago on gigaom
Woohoo! SmartThings has added support for TCP Lighting, the Quirky Pivot Power Genius and, my personal favorite, the ecobee thermostat. This will let people who currently use separate apps to control these devices control them through the SmartThings app instead, cutting down on the number of places you have to go to control your home and giving users a way to set automation plans that incorporate the newly supported gadgets. I'm excited because I'll now be able to program an away mode that will lower my thermostats, cut my lights off, lock my doors and shut my blinds.
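As a generic illustration of what such an away-mode plan boils down to (this is not SmartThings' actual SmartApp code, and the device objects and method names are assumptions for illustration):

```python
# Generic sketch of an "away mode" routine like the one described above.
# Not SmartThings' actual API; device objects and method names are invented
# purely for illustration.
def away_mode(thermostats, lights, locks, blinds, away_temp_f=62):
    for thermostat in thermostats:
        thermostat.set_heating_setpoint(away_temp_f)   # lower the thermostats
    for light in lights:
        light.off()                                    # cut the lights
    for lock in locks:
        lock.lock()                                    # lock the doors
    for blind in blinds:
        blind.close()                                  # shut the blinds
```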

Read More...
posted 9 days ago on gigaom
If you're an ambitious startup less than two years old, check out the Structure 2014 Launchpad competition. This is a golden opportunity to strut your stuff on stage at Gigaom's 7th annual Structure event for a premier audience of industry luminaries, reporters and VCs. Past contestants include SaltStack, last year's champion, with its enterprise-focused configuration management tool. Previous winners include Keen.io, which went on to snag seed funding; DotCloud, now known as Docker, the super-hot container technology; and CloudSwitch, which was subsequently acquired by Verizon and is now a core part of Verizon's cloud story. That's some pretty heady company.

If your company is selected as one of the 2014 finalists by Gigaom's editors, you will make your pitch before a panel of top venture capital execs and get the benefit of their feedback. And you get to check out the whole show, which focuses on how cloud technologies can facilitate, accelerate and take advantage of the internet of things, with speakers including Amazon CTO Werner Vogels, Google SVP Urs Hölzle, Microsoft's Scott Guthrie and many, many more. Apply here, but hurry! The deadline is May 16.

Read More...
posted 9 days ago on gigaom
There was big security news this week, as a serious flaw known as Heartbleed was discovered that affects a large portion of the security software that runs on the majority of internet servers. Check this article out for a look into everything you need to know. On a lighter note, if you like Legos and robots, do yourself a favor and check out this post on TinkerBots. And now the reason you are here, the top jobs for this week:

Navitas: IT Service Coordinator (Lowell, Mass., with travel required)
Booking.com: Team Lead Development (Amsterdam)
Booking.com: Software developer – willing to learn Perl (Amsterdam)
Raytheon: Principal Software Engineer — Hadoop (Sterling, Va.)
Microsoft: Software Development Engineer (SDE) II (Redmond, Wash.)

We also have other listings from companies like Zappos.com, Citigroup, Northrop Grumman and more. Click here to see what else is on our job board.

Read More...
posted 9 days ago on gigaom
One of the benefits often cited for the use of open-source software is that because it is so widely available and open to review by developers, any security flaws will be caught sooner than with closed, proprietary systems. This week's near-panic around the Heartbleed flaw in the OpenSSL open-source encryption software calls that contention into question. When you have internet security czars telling people to "stay off the internet," there's a problem.

The vulnerability, which afflicted popular websites and networking gear from Cisco and Juniper, has been around for more than two years but was brought to light by researchers at Google and Codenomicon early this week. That's a long time. But the German programmer who claimed responsibility for contributing the flawed code in late 2011 told The Guardian that he, not the open-source model, is to blame. Robin Seggelmann said his update did what it was supposed to do — enable the "Heartbeat" feature in OpenSSL — but also accidentally created the vulnerability that caused all the hubbub. Seggelmann said he "wrote the code and missed the necessary validation by an oversight. Unfortunately, this mistake also slipped through the review process and therefore made its way into the released version."

So why did the resulting vulnerability stay under the radar for so long? Because, in his view, OpenSSL, while widely deployed, is also under-funded. OpenSSL is "definitely under-resourced for its wide distribution. It has millions of users but only very few actually contribute to the project," he told the Guardian.

And that brings us back to the question of whether open-source software is always better than company-funded-and-supported commercial (paid) software. It's good to debate the issue, but given the traction that Linux, Apache and perhaps OpenStack have gotten, this horse may have left the barn. And remember, commercial software companies haven't exactly covered themselves in glory with regard to security. Most notably, security giant RSA reportedly shipped encryption software with a known backdoor.
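To make the "missed validation" concrete, here is a toy model of the bug class in question: trusting a client-supplied length field without checking it against the payload actually sent. This is a simplified illustration in Python, not OpenSSL's C code.

```python
# Toy model of the bug class described above: echoing back a "heartbeat"
# payload while trusting the sender's claimed length. Simplified illustration,
# not OpenSSL's actual C implementation.
PROCESS_MEMORY = bytearray(b"...session keys, passwords, other users' data...") * 100

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    # Missing check: if claimed_len exceeds the real payload, the response
    # keeps reading into adjacent memory (modelled here by PROCESS_MEMORY).
    overread = max(0, claimed_len - len(payload))
    return (payload + bytes(PROCESS_MEMORY[:overread]))[:claimed_len]

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # The validation that was missed: reject lengths larger than the payload.
    if claimed_len > len(payload):
        raise ValueError("heartbeat length exceeds actual payload")
    return payload[:claimed_len]
```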

Read More...
posted 9 days ago on gigaom
According to Dan Primack's email newsletter, Kleiner Perkins' greentech-focused partner Amol Deshpande appears to have launched a startup called the Farmer's Business Network. Here's the SEC filing for it, which shows it has raised $4.6 million (of a $6 million round), and the company's headquarters are in Kleiner's offices in Menlo Park. Deshpande is listed as president and CEO. Making agriculture more efficient through IT, as well as genetics, is a hot area for investment, while traditional "cleantech" has continued to decline. Deshpande lists on his bio that he's worked on waste energy company Harvest Power, veggie protein startup Beyond Meat and "a stealth agricultural company," among others.

Read More...
posted 9 days ago on gigaom
In October, Google revamped Android with the KitKat version, but few devices run the newest software yet. In fact, web traffic data in North America suggests adoption of KitKat is even slower than it was for the prior version of Android, Jelly Bean. According to a report from Chitika Insights published Friday, around 10 percent of Android smartphones and tablets in the study are running Android 4.4 or better. In the six months following the release of Android's Jelly Bean software, that figure was 14 percent, Chitika says.

Chitika's data is taken from websites in the U.S. and Canada that use the company's ad platform, so this information isn't an exact detailing of the entire market. However, Chitika's data pool is large; this report was drawn from tens of millions of ad impressions on Chitika's network between March 31 and April 6. In terms of device types, the data suggests that Android phones and tablets are getting updated or sold with KitKat at the same pace: handsets running Android 4.4 accounted for 10 percent of measured web traffic in the study, while 10.6 percent of tablets used KitKat during the measurement period.

I was a little surprised by the data, given that we have seen some Android 4.4 software updates arrive recently. Motorola has generally led the way, offering KitKat for both its Moto X and Moto G handsets as early as November. At this point, the four major U.S. carriers have all pushed KitKat to the Moto X. But one phone doesn't make for a whole market, meaning there are plenty of devices from Samsung, HTC, LG and others that are still running Android 4.3 or older.

Google's own data gives credence to the Chitika report, showing even fewer devices running Android 4.4. For the period ending April 1, Google's dashboard shows that just 5.3 percent of all Android devices visiting the Google Play Store are running the latest software. That too is a proxy, as devices could be running Android 4.4 and not hitting the Play Store. Additionally, Google's data is global, not regional. It's interesting, however, that Google shows 17.8 percent of Android devices still running the old Gingerbread software, while Chitika's numbers suggest that figure is 20.3 percent.

Is this a huge issue or challenge for Google? Not any more so than the company has faced with prior versions. From a consumer standpoint, there won't be much of a front-facing difference between devices running Android 4.3 and Android 4.4. Developer frustration, however, could be a factor: why create or evolve apps with some of the newest features available in KitKat when millions of devices can't yet use them?

Read More...
posted 9 days ago on gigaom
I can't count the number of times I've accidentally closed a browser tab on my phone. If it's ever happened to you, you'll know it's as frustrating as unsuccessfully trying to pick up an important incoming call a single millisecond after the other party disconnects. Google can't help with the infuriating missed phone call problem, but it is fixing your mistakes when it comes to browser tabs: the latest Chrome Beta for Android has a new undo close tab feature to bring a browser tab back from the dead.

With this new version of the beta browser, each time you close a tab, you'll see a small message at the bottom of the display showing an Undo action. It's very much like Google's undo mechanism in Gmail for times you may have sent or deleted a message a bit prematurely. Tap the Undo button and your tab returns, but you'll only have a few seconds; after that, the Undo option gracefully fades away along with your browser tab.

This latest beta of Chrome is built from version 35 of Chrome and includes a few other nice features as well. Google says the updated software can play full-screen video with subtitles and HTML5 controls, and supports casting of "some videos" to its Chromecast product. Also included is support for multi-window devices, suggesting the Chrome Beta for Android will work on Samsung phablets and tablets that run multiple Android applications simultaneously on the screen.
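The behavior amounts to deferring the real tab teardown for a short grace period so it can still be cancelled. A generic sketch of that pattern (not Chrome's implementation):

```python
# Generic sketch of the "undo for a few seconds" pattern described above:
# defer the destructive step briefly so it can still be cancelled.
# This illustrates the idea only; it is not Chrome's implementation.
import threading

class UndoableClose:
    def __init__(self, restore, destroy, grace_seconds=5.0):
        self._restore = restore                       # callback: bring the tab back
        self._timer = threading.Timer(grace_seconds, destroy)
        self._timer.start()                           # really discard after the grace period

    def undo(self):
        """Call if the user taps Undo before the option fades away."""
        self._timer.cancel()
        self._restore()

# Example usage with placeholder callbacks:
closer = UndoableClose(
    restore=lambda: print("tab restored"),
    destroy=lambda: print("tab discarded for good"),
)
# closer.undo()   # tapping Undo within the window brings the tab back
```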

Read More...
posted 10 days ago on gigaom
Intellectual Ventures (IV) is seeking a major new investment to expand its controversial patent trolling operations but, unlike on past occasions, Apple is not coming along for the ride. According to a Reuters report, Apple has turned down an invitation to join Microsoft and Sony in backing a new IV patent acquisition fund that could be used as a vehicle to extract licensing fees and file lawsuits against companies.

The news comes at a time when Congress is working on a law, the Innovation Act, intended to curb abuse of the patent system through measures like fee shifting and new legal discovery rules that would make it harder for patent trolls to swamp their targets with litigation costs. The rise of patent trolls, which are a target of the proposed law, can be traced in no small part to Intellectual Ventures, which has armed thousands of shell companies with old patents in recent years. While Apple invested in earlier IV funds, its reluctance to do so again may stem from the fact that it is being swamped by trolls itself; in February, the company complained that it has had to go to court with trolls 92 times in the last three years. "Microsoft and Sony's investments give IV a fresh war chest to buy new patents," a patent analyst told Reuters in relation to the new IV fund.

Earlier this year, Intellectual Ventures launched a Political Action Committee to lobby for patent trolls in Washington, a development that is likely galling for the companies that have had to pay off IV and now must watch it use some of their money to seek protection from Congress to continue its trolling operations.

Read More...
posted 10 days ago on gigaom
The Heartbleed security flaw in OpenSSL encryption software that affected popular web and ecommerce sites has also infiltrated many of the Cisco and Juniper routers, switches and firewalls running those sites and the internet at large. In a Cisco security alert updated Thursday, the company said many of its products use a version of OpenSSL affected by the vulnerability. Cisco acknowledged that this "could allow an unauthenticated, remote attacker to retrieve memory in chunks of 64 kilobytes from a connected client or server." Check out the Cisco update for a list of products that are or could be vulnerable. Juniper published a brief "high alert" on its support page, but customers have to log in for more information.

Vulnerable networking gear can be a tricky fix, since many people and small businesses don't necessarily update that gear over time. As security expert Bruce Schneier told Marketwatch: "The upgrade path is going to involve trash can, a credit card, and a trip to Best Buy."

In related news, application performance and security specialist CloudFlare posted an interesting blog post on how serious Heartbleed can be if attackers can harvest 64 kilobytes of server memory, and issued a challenge for geeks to do just that. If an attacker is able to exploit standard buffer over-read bugs to get that information, it would be a "nightmare scenario … requiring virtually every service to reissue and revoke its SSL certificates. Note that simply reissuing certificates is not enough, you must revoke them as well," CloudFlare said. OpenSSL is used in an estimated two-thirds of all active sites. Researchers from Google and security firm Codenomicon found the flaw, and Codenomicon came up with the now ubiquitous Heartbleed logo.

Read More...
posted 10 days ago on gigaom
The sales organization may be a business' richest source of data. Sales reps understand the value of their products, what sells and what doesn't, the drivers that won or lost deals, and the margins necessary to justify their effort. Every success or failure builds that body of knowledge. So why, despite mountains of relevant, high-quality data and a wide range of analytics tools, is sales planning still run on hunches?

Much of the reason is logistical. Data scientists are busy and in high demand, so custom reports are stale by the time they're complete. IT-built reporting tools provide on-demand information, but only within fairly rigid constraints. Neither solution is ideal for incorporating other forms of data from knowledge workers themselves that cannot be gleaned from automated analytics. Without timely access to the data they need to make daily decisions, they rely on instinct over evidence.

In this webinar, our panel will address these topics:

What are the inefficiencies in current manual and automated planning?
Which companies and industries have done the best job of democratizing analytics?
What lessons about data-driven planning can other departments learn from sales?
How should existing business intelligence systems, data warehouses and other analytics tools interact with new systems?
What is the role of storage in analytics?
How should businesses approach capturing and integrating non-transactional data?
What are the cultural and operational concerns of bringing data-driven tools to operations?

Speakers include:

Andrew J. Brust, founder and CEO, Blue Badge Insights
David S. Linthicum, SVP, Cloud Technology Partners
William McKnight, founder and president, McKnight Consulting Group
Simon Tucker, chief customer officer, Anaplan

Register here to join Gigaom Research and our sponsor Anaplan for "Instinct meets evidence: pushing sales limits with operational big data," a free analyst roundtable webinar on Wednesday, April 23, 2014, at 10:00 a.m. PT.

Read More...
posted 10 days ago on gigaom
The debate surrounding 3D printing is forcing legislators and regulators to rethink a broad range of legal issues, from patents to copyrights to liability.

Read More...
posted 10 days ago on gigaom
Facebook has been trying for some time now to clean up or improve the News Feed by removing things it defines as "low quality," and it announced another effort along those lines on Thursday — saying it will reduce the visibility of "like bait" and content that gets posted too often. But all of these efforts have a dilemma at their core: namely, how will Facebook differentiate between what it calls low-quality content and what users really want to see?

In its blog post on the announcement, Facebook says that like-bait is content that "explicitly asks News Feed readers to like, comment or share the post in order to get additional distribution beyond what the post would normally receive." And how do we know whether those likes actually generate more sharing of that content than a post would otherwise receive? The short answer is we don't. Only Facebook knows that, based on its black-box algorithms.

The network posted an example of what it means by like-bait: photos of a baby rabbit, a kitten, dolphins and a mosquito, posted by an account whose name is "When your teacher accidentally scrapes her nails on the chalkboard and you're like whaaaaaat" (which would seem to break Facebook's rules on real names, if nothing else). It asks users to like, share or comment — or ignore. There's no question that many, perhaps even most, Facebook users would dislike this content intensely and vote to have it removed from their News Feed — except perhaps for younger users, who often enjoy that sort of thing, in part because it irritates adults. But I can think of other examples of content that might be considered like-bait that I saw friends willingly share, including photos of people fighting cancer who were trying to get a certain number of likes, and so on.

It might be spam, but I still like it

That kind of thing may not be "high quality" content, but some people clearly enjoy it. Part of Facebook's dilemma can be seen in the blog post itself, when the company describes the difference between what people say when they fill out a survey and what they actually do when they use the site — they click and share or comment, but then when asked, they say that they don't like it:

"People often respond to posts asking them to take an action, and this means that these posts get shown to more people, and get shown higher up in News Feed. However, when we survey people and ask them to rate the quality of these stories, they report that like-baiting stories are, on average, 15% less relevant than other stories with a comparable number of likes, comments and shares."

This is a little like the old days of TV analytics, where people would tell Nielsen that they only watched PBS and nature shows — but when Nielsen switched from surveys to actual monitoring software that tracked what people watched, it found that people's viewing habits were dramatically different. As it turned out, many watched the same brainless sitcoms and goofy specials they claimed to have no interest in when they were filling out the survey.

Facebook says that the changes won't impact pages that are "genuinely trying to encourage discussion among their fans." But how will it distinguish between those pages and the ones that are just posting like-bait? That's not an easy question to answer, even if you have the click habits of a billion users to study.
And Facebook is essentially saying: "We're not going to pay attention to what you do — we're going to purify your News Feed for your own good." As I've tried to point out before, this is part of what makes life a lot harder for Facebook than it is for Twitter. The latter might get complaints about the stream being too noisy, but users know that for the most part, they are seeing the content they choose to see from the users they choose to follow.

Not so on Facebook. Facebook is much more interventionist, because it is trying to create the Platonic ideal of a "digital newspaper" that CEO Mark Zuckerberg seems to have in mind. And so it removes content it thinks might bother you (whether it's photos of violence in Syria or breastfeeding) and chooses the rest of your content based on secret algorithms that you can only guess at — and ones that content owners criticize for being a bait-and-switch. And that is a much harder job.

Post and photo thumbnails courtesy of Thinkstock / jurgenfr and Thinkstock / Justin Sullivan

Read More...