posted 10 days ago on gigaom
In my quest to make 2014 the Year of the iPad, a professional photo editing program that interfaces with my Lightroom-based workflow was a big gap. This week Adobe released Lightroom Mobile (Free, but subscription required) and I took a look at how it could help my photo workflow. Lightroom Mobile lets you perform basic editing and photo culling, and it can sync with your Adobe Lightroom 5.4 desktop client. There is, however, a huge gotcha.

Pricing

The biggest thing that annoys me about Lightroom Mobile is the pricing. It requires either a Creative Cloud license or, at the minimum, a Photoshop Photography Program license. Those run from $9.99 to $600. That's a lot. Unlike Office for iPad, the app simply will not work without a subscription. While Office at least gives you the option to read files without an Office 365 subscription, Adobe Lightroom Mobile greets you with a login screen when you launch the app. I also have a standalone Lightroom 5 license, but without a Cloud license I can't sync my photos to Lightroom Mobile. Given the limited feature set of the mobile app, I think this is a huge miss for Adobe.

What the app can and can't do

The biggest draw to Lightroom Mobile is that it can handle RAW files in a non-destructive manner. It can also sync with my collections in Lightroom 5.4. It has a small number of presets and cropping tools you can use to adjust photos, but they are pretty standard and about on par with most existing photo apps. What I did like is that you can adjust the white balance either via presets or by picking a reference point on the photo. You can also adjust the contrast, brightness, highlights, shadows, whites, blacks, clarity, vibrance and saturation, and you can undo all edits to a photo. What it can't do is the advanced editing you use the Lightroom desktop client for. You cannot use custom presets, or adjust curves, sharpening, noise reduction, lens correction and the like. It's also not a professional-level tool. For starters, your iPad display is not calibrated. In my case, being color blind and converting most of my shots to black-and-white, this is not a problem. Hopefully, Adobe will add more features soon. Right now, the feature set is just too limited to justify a $10/month subscription.

Syncing with Lightroom 5.4

Setting up syncing with Lightroom 5.4 is pretty easy. You go to the collection you want to share and check off a box next to its name. From there, Lightroom syncs down a Smart Preview of each photo. Smart Previews are a lightweight, smaller file format based on the lossy DNG format introduced in Lightroom 4. They also let you edit files that aren't directly attached to your Mac; I use them to edit photos on the go when I'm not attached to my main drive at home. On the iPad, this helps keep file sizes at a manageable level. You can also create collections in Lightroom Mobile and sync those back to the desktop version. You can import photos from your iPad's camera roll, but not your Photo Stream. It's also important to note that your photos are not synced through Adobe's cloud services, so you can't bring your iPad to a shoot, create a collection and have the photos already on your desktop when you get back to your desk.

How it will integrate with my workflow

My photo workflow is pretty basic. I import my photos from my camera's SD card to Lightroom. I then go through the photos and pick or reject them. From there I process the photos with a collection of custom presets.
Lightroom Mobile can certainly help with the culling process. I find using the iPad to go through photos a very relaxing part of the process. You can import your photos during a shoot and then view them with the model to see which ones he or she likes. This saves a ton of time and helps eliminate the need to book another session for a reshoot. Other than that, I don't see myself doing any heavy photo editing on my iPad. I might see how a photo will look in B&W, but all my post-processing will still be done in Lightroom 5.4.

Is it worth the subscription?

If you do not have a Photoshop Photography Program subscription already, I see little reason to subscribe just to get Lightroom Mobile. Unlike Office 365, where all apps can access files stored on your OneDrive, Lightroom Mobile does not access your Creative Cloud storage. If it did, and I had the ability to sync down a collection at will, that might make the subscription palatable. As it is now, the app should work without a subscription, since it's more of a companion app to Lightroom 5.4.

posted 10 days ago on gigaom
The OpenStack Icehouse release due this week promises more business-friendly enhancements, including at least some support for rolling upgrades from the previous (Havana) release. As Red Hat product manager Steve Gordon wrote in a blog post last month: "The Compute services now allow for a level of rolling upgrade, whereby control services can be upgraded to Icehouse while they continue to interact with compute services running code from the Havana release. This allows for a more gradual approach to upgrading an OpenStack cloud, or logical designated subset thereof, than has typically been possible in the past." If this works as promised (although it's unclear exactly what "a level of rolling upgrade" means), it could be a big advance for OpenStack, which has been dinged for the difficulty of upgrades that required a complete system shutdown, something no IT person wants to even consider. Gigaom Research analyst Paul Miller has more on architecting OpenStack for the enterprise.

Expect more OpenStack news to emerge in the run-up to the OpenStack Summit in Atlanta next month, but rivals aren't standing still: Apache CloudStack recently announced its 4.3 release with Hyper-V support, and Eucalyptus continues to push its Amazon-compatible private cloud infrastructure.

To hear more about how the private cloud market is shaping up, check out Structure in San Francisco, where Chris Kemp, founder and chief strategy officer of Nebula; Marten Mickos, CEO of Eucalyptus; and Sameer Dholakia, group VP and GM for Citrix's Cloud Platforms Group, will reunite on stage to discuss private cloud choices. To get a taste of what's in store from their panel, check out their appearance at Structure 2012. You won't be sorry.

Structure Show examines private cloud

For more on how the traction of various private and public clouds is shaping up, check out this week's Structure Show, in which RightScale VP Kim Weins takes us through the company's latest State of the Cloud report.

posted 10 days ago on gigaom
About a year ago, I got fed up with my home Wi-Fi. No matter which router I bought, I simply couldn't get reasonably good signal strength or consistently fast wireless speeds in certain rooms. Going back and wiring my home for data wasn't an option, so I dropped $69 on the REC10 wireless range extender from Amped Wireless. It's probably the best money I spent last year because it solved my wireless woes. Now the company has a newer model called the REC15A and I've been using it for the past several weeks. The new range extender costs $99 and I've found it's worth the premium if you have a newer router like I do. It provides even faster wireless speeds, often coming close to the full home broadband speeds I can get with a wired connection. In fact, in some locations I can actually get more than a 75 Mbps connection over Wi-Fi, the same as if I were connected directly to my home router with an Ethernet cable.

Aside from the price, what's different between the REC10 and REC15A? Three main things.

1. The older model supports the 802.11n 300 Mbps speed standard, according to Amped Wireless. That means it should work well if you have an 802.11n router purchased in the last several years. The REC15A, however, works with faster 802.11ac routers, and I bought one of those, an Asus model, in 2011. And more mobile devices are now supporting the faster Wi-Fi: 802.11ac is supported in my Moto X, for example, as well as the latest flagship phones.

2. My router is dual-band, meaning it can broadcast using both the 2.4 GHz and the 5 GHz frequency bands. The REC10 extender only works with the former frequency, while the newer REC15A uses both simultaneously. That lets me run multiple networks across different channels, which is helpful because I dedicate one band solely to video content. Doing so keeps all of the other "chatty" devices and apps from affecting video on the network.

3. Both extenders boost the signal and range of my home network, but in this case the older model does a slightly better job. The REC10 provides a 600 mW boost while the new REC15A outputs 500 mW. As a result, the range of the newer model is a little less by comparison. The difference is subtle, but I can see it from time to time when checking actual signal strength in my home. I found, however, that it really hasn't affected the speeds; I still routinely get better speeds with the REC15A because of the dual bands and faster 802.11ac wireless technology.

The HTC One M8 supports 802.11ac, making for fast Wi-Fi speeds all across my home with the REC15A installed.

If you're not getting the full speeds of your home broadband over Wi-Fi, I can definitely recommend both of the Amped Wireless range extenders. Which one you should consider depends on your current router and how much range you're looking for. With an older router, I'd suggest the REC10, or upgrade to an 802.11ac router and splurge on the REC15A. Already have an 802.11ac router? The answer is a no-brainer: the REC15A will be the better unit overall. Both are simple to set up: Just plug them into an outlet and configure the unit over a web connection. In under five minutes you'll be able to experience fast in-home Wi-Fi in nooks and crannies you never could before. Now that my review unit is heading back, I'll be ordering one of my own.

posted 10 days ago on gigaom
Remotely accessing a computer isn't new and there are plenty of options to do so. One of the newest comes from Google, however: The company has been working on an Android version of its Chrome Remote Desktop app for nearly a year, and a full release is likely imminent. A select few beta testers are using the software, which provides remote control of a Windows or Mac computer from an Android phone or tablet. We noted on this week's Chrome Show podcast that the software will likely provide a better experience on a tablet, owing to its larger display; it's not ideal to show a full computer screen on a small phone. The app appears to work similarly to Google's Chrome Remote Desktop extension, which works with any computer that has the Chrome browser installed. Tune in below or download the full podcast episode here to hear our thoughts about Chrome Remote Desktop, as well as news of the coming-soon Asus C200 Chromebook and the potential for Google's Chromecast to become a daily dashboard for your television.

posted 10 days ago on gigaom
New Gigaom Research reports this week include Ben Kepes' evaluation of macro technology trends and Aram Sinnreich's continued research on 3D printing. Analyst Paul Miller also explores OpenStack deployment in unison with VMware virtualization. Note: Gigaom Research is a subscription-based research service offering in-depth, timely analysis of developing trends and technologies. Visit research.gigaom.com to learn more about it.

Buyers Lens: How to utilize cloud computing, big data, and crowdsourcing for an agile enterprise, by Ben Kepes

This week, analyst Ben Kepes takes a 10,000-foot view of organizations today and the trends that threaten traditional business. Rather than viewing change as a threat, companies that embrace technology shifts will find opportunities to remain competitive, increase efficiency, and generate new business. In this report, Kepes highlights cloud computing, big data, and crowdsourcing as three key technologies every organization must consider, regardless of industry.

Connected Consumer: Legal challenges and opportunities for 3D printing, by Aram Sinnreich

In this report, Aram Sinnreich reviews the undecided legal issues that have the greatest potential effect on creators, manufacturers, and other stakeholders involved in the 3D printing marketplace. Two of these major gray areas, patents and copyrights, have not been adequately addressed, leaving billions in revenue in question. For further analysis on 3D printing, be sure to check out his recommendations for companies impacted by additive manufacturing.

Cloud: Architecting OpenStack for enterprise reality, by Paul Miller

The open-source cloud infrastructure project OpenStack has been top of mind for enterprise IT managers. Instead of throwing away existing investment in virtualization, this report proposes a hybrid approach and illustrates integration between OpenStack-powered clouds and VMware virtualization. Analyst Paul Miller introduces OpenStack and then explores the benefits of implementing OpenStack alongside on-premise solutions.

Featured image from Shutterstock/alphaspirit.

posted 10 days ago on gigaom
Compute and storage are essentially commodity services, which means that for cloud providers to compete, they have to show real differentiation. This is often achieved with supporting services like Amazon's DynamoDB and Route 53, or Google's BigQuery and Prediction API, which complement the core infrastructure offerings. Performance is also often singled out as a differentiator. One of the things that often bites production usage, especially in inherently shared cloud environments, is the so-called "noisy neighbor" problem. This can mean other guests stealing CPU time, increased network traffic and, particularly problematic for databases, I/O wait. In this post I'm going to focus on networking performance. This is very important for any serious application because it affects the ability to communicate and replicate data across instances, zones and regions. Responsive applications and disaster recovery, areas where up-to-date database replication is critical, require good, consistent performance. It's been suggested that Google has a massive advantage when it comes to networking, due to all the dark fibre it has purchased. Amazon has some enhanced networking options that take advantage of special instance types with OS customizations, and Rackspace's new Performance instance types also boast up to 10 Gbps networking. So let's test this.

Methodology

I spun up the listed instances to test the networking performance between them. This was done using the iperf tool on Linux. One server acts as the client and the other as the server:

Server: iperf -f m -s
Client: iperf -f m -c hostname

The OS was Ubuntu 12.04 (with all the latest updates and kernel), except on Google Compute Engine, where it's not available; there, I used the Debian Backports image. The client was run three times for each test type (within zone, between zones and between regions), with the mean average taken as the value reported. (A short sketch of how these client runs could be scripted follows the Amazon results below.)

Amazon networking performance

                            t1.micro (1 CPU)    c3.8xlarge (32 CPUs)
us-east-1a to us-east-1a    135 Mbits/sec       7013 Mbits/sec
us-east-1a to us-east-1d    101 Mbits/sec       3395 Mbits/sec
us-east-1a to us-west-1a    19 Mbits/sec        210 Mbits/sec

Amazon's larger instances, such as the c3.8xlarge tested here, support enhanced 10 Gbit/sec networking; however, you must use the Amazon Linux AMI (or manually install the drivers) within a VPC. Because of the additional complexity of setting up a VPC, which isn't necessary on any other provider, I didn't test this, although it is now the default for new accounts. Even without that enhancement, the performance is very good, nearing the advertised 10 Gbits/sec. However, the consistency of the performance wasn't so good. The speeds changed quite dramatically across the three test runs for all instance types, much more than with any other provider. You can use internal IPs within the same zone (free of charge) and across zones (incurring inter-zone transfer fees), but across regions you have to go over the public internet using the public IPs, which incurs further networking charges.
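As promised above, here is a minimal sketch of how the client-side runs could be automated, assuming iperf 2 is installed and an `iperf -f m -s` server is already listening on each target host. The target list and the output parsing are my own conveniences for illustration, not part of the original test setup.

```python
# Sketch: run the article's client-side iperf test three times per target and
# report the mean throughput. Assumes an `iperf -f m -s` server is already
# listening on each target host.
import re
import statistics
import subprocess

TARGETS = ["10.0.0.2"]  # hypothetical server IPs: same zone, other zone, other region
RUNS = 3

def run_iperf(host: str) -> float:
    """Run one iperf client test and return the reported Mbits/sec."""
    out = subprocess.run(["iperf", "-f", "m", "-c", host],
                         capture_output=True, text=True, check=True).stdout
    # iperf's summary line ends in something like "944 Mbits/sec"
    match = re.search(r"([\d.]+)\s+Mbits/sec", out)
    if not match:
        raise RuntimeError(f"could not parse iperf output:\n{out}")
    return float(match.group(1))

for host in TARGETS:
    results = [run_iperf(host) for _ in range(RUNS)]
    print(f"{host}: runs={results} mean={statistics.mean(results):.0f} Mbits/sec")
```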
Google Compute Engine networking performance

                                   f1-micro (shared CPU)    n1-highmem-8 (8 CPUs)
us-central-1a to us-central-1a     692 Mbits/sec            2976 Mbits/sec
us-central-1b to us-central-1b     905 Mbits/sec            3042 Mbits/sec
us-central-1a to us-central-1b     531 Mbits/sec            2678 Mbits/sec
us-central-1a to europe-west-1a    140 Mbits/sec            154 Mbits/sec
us-central-1b to europe-west-1a    137 Mbits/sec            189 Mbits/sec

Google doesn't currently offer an Ubuntu image, so instead I used its backports-debian-7-wheezy-v20140318 image. For the f1-micro instance, I got very inconsistent iperf results for all zone tests. For example, within the same us-central-1a zone, the first run showed 991 Mbits/sec, but the next two showed 855 Mbits/sec and 232 Mbits/sec. Across regions between the US and Europe, the results were much more consistent, as were all the tests for the higher-spec n1-highmem-8 server. This suggests the variability was because of the very low spec, shared-CPU f1-micro instance type. I tested more zones here than on other providers because on April 2, Google announced a new networking infrastructure in us-central-1b and europe-west-1a, which will later roll out to other zones. There was about a 1.3x improvement in throughput using this new networking, and users should also see lower latency and CPU overhead, which are not tested here. Although 16-CPU instances are available, they're only offered in limited preview with no SLA, so I tested on the fastest generally available instance type. Since networking is often CPU-bound, there may be better performance available when Google releases its other instance types. Google allows you to use internal IPs globally: within zone, across zones and across regions (i.e., using internal, private transit instead of going across the internet). This makes it much easier to deploy across zones and regions, and indeed Google's Cloud Platform was the easiest and quickest to work with in terms of the control panel, the speed of spinning up new instances and being able to log in and run the tests in the fastest time.

Rackspace networking performance

                                      512 MB Standard (1 CPU)    120 GB Performance 2 (32 CPUs)
Dallas (DFW) to Dallas (DFW)          595 Mbits/sec              5539 Mbits/sec
Dallas (DFW) to North Virginia (IAD)  30 Mbits/sec               534 Mbits/sec
Dallas (DFW) to London (LON)          13 Mbits/sec               88 Mbits/sec

Rackspace does not offer the same kind of zone/region deployments as Amazon or Google, so I wasn't able to run any between-zone tests; instead I picked the next closest data center. Rackspace offers an optional enhanced virtualization platform called PVHVM. This offers better I/O and networking performance and is available on all instance types, which is what I used for these tests. Similar to Amazon, you can use internal IPs within the same location at no extra cost, but across regions you need to use the public IPs, which incur data charges. When trying to launch two 120 GB Performance 2 servers at Rackspace, I hit our account quota (with no other servers on the account) and had to open a support ticket to request a quota increase, which took about an hour and a half to approve. For some reason, launching servers in the London region also requires a separate account, and logging in and out of multiple control panels soon became annoying.

Softlayer networking performance

                         1 CPU, 1 GB RAM, 100 Mbps    8 CPUs, 2 GB RAM, 1 Gbps
Dallas 1 to Dallas 1     105 Mbits/sec                911 Mbits/sec
Dallas 1 to Dallas 5     105 Mbits/sec                921 Mbits/sec
Dallas 1 to Amsterdam    29 Mbits/sec                 61 Mbits/sec

Softlayer only allows you to deploy into multiple data centers at one location: Dallas.
All other regions have a single facility. Softlayer also caps out at 1 Gbps on its public cloud instances, although its bare metal servers do have the option of dual 1 Gbps bonded network cards, allowing up to 2 Gbps. You choose the port speed when ordering or when upgrading an existing server. Softlayer also lists 10 Gbit/s networking as available for some bare metal servers. Similarly to Google, Softlayer's maximum instance size is 16 cores, but it also offers private CPU options, which give you dedicated cores rather than sharing cores with other users. This allows up to eight private cores, for a higher price. The biggest advantage Softlayer has over every other provider is completely free private networking between all regions, whereas every other provider charges for transfer out of zone. When you have VLAN spanning enabled, you can use the private network across regions, which gives you an entirely private network for your whole account. This makes it very easy to deploy redundant servers across regions and is something we use extensively for replicating MongoDB at Server Density, moving approximately 500 Mbits/sec of internal traffic across the US between Softlayer's Washington and San Jose data centers. Not having to worry about charges is a luxury only available with Softlayer.

Who is fastest?

                  Fastest (low spec)   Fastest (high spec)   Slowest (low spec)   Slowest (high spec)
Within zones      Google               Amazon                Softlayer            Softlayer
Between zones     Google               Amazon                Rackspace            Softlayer
Between regions   Google               Amazon                Rackspace            Softlayer

Amazon's high-spec c3.8xlarge gives the best performance across all tests, particularly within the same zone and region. It was able to push close to the advertised 10 Gbits/sec of throughput, but the high variability of results may indicate some inconsistency in real-world performance. Yet for very low cost, Google's low-spec f1-micro instance type offers excellent networking performance: ten times faster than the terrible performance from the low-spec Rackspace server. Softlayer and Rackspace were generally bad performers overall, but at least Rackspace gets some good inter-zone and inter-region performance and performed well for its higher instance spec. Softlayer is the loser overall here, with low performance plus no network-optimized instance types; only its bare metal servers have the ability to upgrade to 10 Gbits/sec network interfaces.

Mbits/s per CPU?

CPU allocation is also important. Rackspace and Amazon both offer 32-core instances, and we see good performance on those higher-spec VMs as a result. Amazon was fastest for its highest-spec machine type, with Rackspace coming second. The different providers have different instance types, so it's difficult to do a direct comparison on the raw throughput figures. An alternative ranking method is to calculate how much throughput you get per CPU. We'll use the high-spec, same-zone figures and do a simple division of the throughput by the number of CPUs:

Provider     Throughput per CPU
Google       380 Mbits/s
Amazon       219 Mbits/s
Rackspace    173 Mbits/s
Softlayer    113 Mbits/s
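For anyone who wants to reproduce that ranking, here is a minimal sketch that recomputes it from the same-zone, high-spec figures quoted in the tables above; the small difference for Softlayer (113.9 versus the 113 listed) is just rounding.

```python
# Recompute the Mbits/sec-per-CPU ranking from the same-zone, high-spec
# results quoted in the tables above.
same_zone_high_spec = {
    "Google":    (3042, 8),   # n1-highmem-8, us-central-1b to us-central-1b
    "Amazon":    (7013, 32),  # c3.8xlarge, us-east-1a to us-east-1a
    "Rackspace": (5539, 32),  # 120 GB Performance 2, DFW to DFW
    "Softlayer": (911, 8),    # 8 CPU / 1 Gbps instance, Dallas 1 to Dallas 1
}

ranking = sorted(same_zone_high_spec.items(),
                 key=lambda item: item[1][0] / item[1][1],
                 reverse=True)

for provider, (mbits, cpus) in ranking:
    print(f"{provider:<10} {mbits / cpus:6.1f} Mbits/s per CPU")
# Prints roughly 380.3, 219.2, 173.1 and 113.9, matching the table above.
```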
The best might not be the best value

If you have no price concerns, then Amazon is clearly the fastest, but it's not necessarily the best value for money. Google gets better Mbits/sec-per-CPU performance, and since you pay for those CPUs, it's a better value. Google also offers the best performance on its lowest-spec instance type, although that is quite variable due to the shared CPU. Rackspace was particularly poor when it came to inter-region transfer, and Softlayer isn't helped by its lack of any kind of network-optimized instance types. Throughput isn't the end of the story, though. I didn't look at latency or CPU overhead, and these will have an impact on real-world performance. It's no good getting great throughput if it requires 100 percent of your CPU time! Google and Softlayer both have an advantage when it comes to operational simplicity because their VLAN spanning-like features mean you have a single private network across zones and regions. You can utilize their private networking anywhere. Finally, pricing is important, and an oft-forgotten cost is network transfer fees. Transfer is free within zones for all providers, but only Softlayer has no fees for data transfer across zones and even across regions. This is a big saver.

David Mytton is the founder and CEO of Server Density, a cloud management and server monitoring specialist. He can be contacted at david@serverdensity.com or followed on Twitter @davidmytton.

Featured image: Shutterstock/ssguy

posted 11 days ago on gigaom
The Netflix-Comcast truce has demonstrated once more how crucial video has become for today's internet. YouTube alone streams enough footage each month to theoretically entertain every single human alive for four hours. Facebook users spend an average of 84 minutes a month watching clips on the social network, topping five billion views in January. The data inside each clip and the metadata about each viewer's interaction with a video can make or break marketing campaigns. But are companies making use of the vast treasure trove of data that all those streamed videos give them? So far, the answer is no.

Using big data to boost one's sales and marketing activities may sound like old news, but most companies today don't use the full suite of modern business intelligence (BI) tools at their disposal. Some have embarked on implementing the open source Hadoop framework for data warehousing, including newer iterations such as Impala that make up for the lack of speed of the initial Hadoop versions. Some companies are trying new approaches to turn the entire web into a data repository, connecting sources across the cloud to each other and to their various on-premise datasets to run complex queries in a browser. And some are betting on new appliances that supposedly make mining your data as easy as a search query. But most companies still struggle to make sense of the basic requirements for all the different big data technologies out there, from budget to necessary staff skills. They also need internal buy-in to connect entirely new data sources to their sales, marketing and other activities to get that 360-degree view of their value chain and operations that the software guys have been promising. That's a pity, because video data is a particularly valuable asset. The foray into the rich data sets of social media and video lets companies large and small see more and sell more.

Photo from Thinkstock/Oleksiy Mark

Take one European enterprise my firm works with. This company noticed that its sales of one product had shot up and almost drained inventory in a few days. But why? When the sales team talked to the social media guys, they found out that a video about the new product had been viewed more than 100,000 times the day before the spike occurred. The firm used a team of two to pull together data across the web and inside its firewall: online orders and conversion rates, data from its YouTube and Vimeo accounts, plus Google Analytics and Facebook Insights. Turns out the bestseller story was a bit more complicated. YouTube views had indeed shot up, but they had only led to a 15 percent increase in orders. What really drove the unusually high sales was something else: the moment when die-hard fans started spreading the word. They shared the clip everywhere from Tumblr to Facebook and got their friends to watch it on mobile devices. Viral plus handheld generated a 40 percent sales increase, but that tidbit only showed up once all the dots were connected. The company went a step further and pushed this analysis to its salesforce users. It was less a pep talk than advance prep work for the next launch. "This intel convinced us to syndicate the same content on different channels, but to properly engage each type of audience, whether we're talking to impulsive, Twitter-happy buyers, careful researchers on Quora, or collector types on Pinterest," the marketing head told me.
"For the next launch, we decided to focus on a mobile promotion that generated similar sales."

Mining video data is the next big thing in harnessing big data. It is simply too big a data pool to ignore. YouTube alone has more than a billion unique viewers each month, 80 percent of them from outside the U.S. The number of subscriptions has tripled since last year, and 40 percent of all content is viewed on mobile devices. This is why the POV should meet the POS. Only when you mash up all these pieces of information, and do so as quickly as possible, do you stand a chance of establishing cause and effect. It might not sound as sexy as "big data," but mining video clips brings enterprises one step closer to understanding marketing success, and how to repeat it. Even better, there are tools out there that do not require nerds. It would be wrong to declare one data source suddenly more important than all the others, but companies need to put the spotlight on video and marry those insights with bone-dry sales and marketing numbers.

Rachel Delacour is CEO and co-founder of cloud business analytics pioneer BIME Analytics and also holds an advisory role on cloud computing standards with EuroCloud. Follow her on Twitter @bimeanalytics.

Feature image from Shutterstock/photosani

posted 11 days ago on gigaom
My seven years on the Internet Engineering Task Force (IETF), from 2003 to 2010, definitely taught me interesting things, including how to get a group of people to deliver when you have no control over their jobs. As co-chair of the Network-based Mobility Management (NETLMM) working group, I led one of the more contentious working groups at the IETF. We managed to deliver relevant standards and actually brought closure to the working group so we could move on. Overall, my experience with the IETF has positively contributed to my skills in leadership, consensus building, design thoroughness and seeing the big picture. It also gave me the opportunity to interact with incredibly talented people from diverse organizations and to really understand how the Internet came to be what it is today. And yet, several years ago, when I was nominated for the Internet Architecture Board, I decided it was not for me. Not long after, I took an indefinite leave of absence from the IETF and have not returned since. There are times I feel guilty about not giving as much to the Internet anymore, and I take great pride in, and consider it my good fortune, to have served on committees like the Security Directorate, reviewing contributions to ensure that they don't break the security of the Internet. However, I find myself less distraught as I try to serve the Internet through other practical contributions from outside the fences of the standards organizations. (I've also had my share of experiences at other standards organizations like the IEEE, 3GPP and 3GPP2.)

So, why did I actually stop contributing to standards definitions? The primary reason is that while the pace at which standards are written hasn't changed in many years, the pace at which the real world adopts software has become orders of magnitude faster. Standards, unfortunately, have become the playground for hashing out conflicts and carrying out siloed agendas and, as a result, have suffered a drastic degradation.

Consider the Internet of Everything (IoE), one of the hottest topics of today. The Internet of Everything, you say? Surely, this must be built on interoperable standards! How can you possibly be talking to everything, from lightbulbs to toothbrushes to phones, without interoperability? That sounds absurd! And you would be right; there is a need for interoperability. But what is the minimum we need? Is it IP? Is it some link layer defined by the IEEE, such as 802.15.4? Or Bluetooth 4.0? HTTP perhaps? It is useful to remember that none of these is fully sufficient on its own to have IoE working in a way that is meaningful to the user or the end consumer. And yet, beyond the inevitable PHY (physical) and MAC (link layer) protocols that must be defined by the IEEE, once those are in place, we are ready to roll. Running code and rough consensus, the motto of the IETF, used to be realizable at some point. Nowadays, it is as though Margaret Thatcher's words, "consensus is the lack of leadership," have come to life. In the name of consensus, we debate frivolous details forever. In the name of patents, we never finish. One recent case in point is the long and painful codec battles in the WebRTC working group. I have tremendous respect for a good number of the people who participate at the IETF and other standards organizations and who continue to make the Internet functional and sound.
I value interoperability and hope that we will get it together for the sake of IoE, because it actually is going to be hard to realize that vision without good interoperability. But I look across the board at the IEEE, IETF, SDN organizations and the like and feel that these organizations need a radical restructuring effort. They need to be shaken up, turned on their heads and revamped in order to have monumental impact on the Internet once again. For one, we all need to agree that everyone gains from the Internet being unencumbered and that interoperability only helps the Internet serve all our needs better. More critically, I believe it is time to revisit the tradeoffs between consensus and leadership; they absolutely should not be considered one and the same. This will be tricky and will require careful balancing of undesirable control and faster decisions. Most likely, a change like this will require a complete overhaul of the leadership selection process and structure. But without this rather drastic shakeup, I'm afraid we are widening the gap between standards and reality. The world is marching toward fragmented islands of communication connected via fragile pathways. It is inevitable, as this is the fastest path to market. Unless these standards organizations make radical shifts toward practicality, their relevance will soon be questionable. For now, some of the passionate people will go off and try to make real things happen elsewhere. I feel like a loser for saying "I quit writing standards"; kudos to the people who are sticking with it to make the Internet a better place. Some day, hopefully, we will all be better off because of it!

Vidya Narayanan is an engineer at Google. With a long history in mobile, she is obsessed with enabling amazing mobile experiences. She blogs at techbits.me and on Quora. Follow her on Twitter @hellovidya.

Featured image courtesy of Shutterstock user almagami

posted 11 days ago on gigaom
There is an almost-secret battle going on behind the scenes of the mobile platform wars, and that is the battle for mobile browser market share. What makes the battle to become the dominant browser on mobile different from the desktop is that the battle lines are drawn predominantly along device and platform boundaries. When you take a closer look at the race to become the top browser on the iOS platform, you will find that it is features rather than speed that users are choosing.

Mobile browser market share

While comScore data may show that more people are using their mobile devices than their personal computers, this does not seem to apply when it comes to browsing the web. Looking at data collected from StatCounter, 24.9 percent of all web traffic came from mobile devices in April 2014, up from 13.9 percent in April 2013. While this does show that mobile browsing will likely overtake desktop browsing sometime in the future, it has not happened quite yet. Any time markets grow this fast, there will inevitably be competition and a race to the top. When it comes to mobile browser market share, the dynamics of changing share are reminiscent of the desktop browser wars of the past. Looking at the top nine mobile browsers from the last 12 months, you can see that Chrome is fast becoming the dominant browser across all of mobile, climbing from 2.29 percent in April 2013 to 13.59 percent in April 2014 and overtaking Opera for the No. 3 position, according to StatCounter.

Benchmarking results on iOS

When choosing which browser to use on iOS, the following data shows that it cannot be performance that is the driving factor. This is interesting, as browser speed continues to be one of the major factors influencing which desktop browser to use. For the benchmarking tests, the iPad version of each browser was used on a 128GB iPad Air running the latest iOS 7.1 update. Three different test suites were used to test the performance of the nine different web browsers: SunSpider v1.0.2, Octane 2.0 and V8 Benchmark Suite v7. Looking at the results, you can see what Jay Sullivan, Mozilla's vice president of product, was referring to back in March of last year. You may recall that Mozilla pulled its Firefox Home app from the App Store and halted all development of an iOS browser due to the fact that Apple restricts third-party browser developers to using the UIWebView rather than their own rendering and JavaScript engines. As a result, almost every third-party browser tested lags behind Apple's own Safari mobile browser where performance is concerned. The results show that the browsers, including Google's own Chrome browser, perform at nearly identical levels. That is, until you look at the results coming from the Puffin mobile browser for iOS. Puffin outperformed Safari in all three tests. Another notable exception was the fact that Opera was unable to complete any of the benchmark tests. Seeing as how Opera for iOS has not been updated since October of 2012, it is no wonder that it could not execute any of the latest tests.

Unique features drive choice

Puffin Web Browser ($3.99, Universal) has been able to achieve its wicked-fast performance on iOS because the heavy lifting is not happening on iOS: Puffin uses cloud computing to render web pages. Not only does the cloud behind Puffin make it a fast-performing browser, it also allows Puffin to support Adobe Flash Player 11.9.
To help users with Flash sites that were originally built for the mouse, Puffin has a virtual gamepad for playing online games built with Flash, as well as a virtual trackpad that simulates mouse operations like on a personal computer. Puffin also allows you to change your user agent setting, which makes it a good browser choice when you are trying to replace your personal computer with your iPad. While it can sync your browser tabs with Chrome using your Google account, it does not sync your bookmarks or history.

Google Chrome for iOS (Free, Universal) definitely has its appeal to users of the desktop version of Chrome, and there are a lot of them: Chrome is the dominant browser on the desktop, with a commanding 46.49 percent share on StatCounter. Being able to sync your history, bookmarks and tabs across all of your devices and your desktop can certainly be more important than having the fastest browser. Google really has done a great job of integrating its online services into the apps it builds for iOS, and many third-party apps now support "Open in Chrome" as one of their sharing options.

Safari Mobile (Free, Universal) can sync your bookmarks, reading list, open tabs and history with all of your other devices, including Safari on OS X. What you may not know is that you can sync your bookmarks with Internet Explorer, Firefox or Chrome on Windows using the iCloud Control Panel 3.1 for Windows. To do so you do need to create an iCloud account, but you do not have to use iCloud's email services. In fact, you can use any email address when creating the iCloud account that you want to sync your bookmarks with. That way you can use Safari's fast browser on your iOS device and any browser on your Windows desktop.

iCab Mobile Web Browser ($1.99, Universal) has one unique feature that may appeal to anyone who shares their iOS device with others: it can support multiple users on the same device. With iCab you can add accounts that maintain their own preferences, profiles and browsing history. Like Chrome, iCab has also done a great job of partnering with third-party developers that support iCab as your device browser of choice. It also has enhanced support for filling out forms online, as well as for uploading and downloading content from the web.

Dolphin Browser (Free, iPhone; Free, iPad) has extensions for Safari, Chrome and Firefox that enable you to sync history, bookmarks, passwords and open tabs between your devices and your desktop, a service it calls Dolphin Connect. It has its own integrated voice search, Sonar, that you activate by shaking your device. You can also use gestures to launch your favorite URLs. If there happens to be another Dolphin user nearby, you can quickly share a link with them using the WiFi broadcast feature. When it comes to creating a rich set of unique and innovative browsing features, Dolphin has really outdone itself.

posted 11 days ago on gigaom
RightScale has been managing public cloud resources for companies since TK, and thousands of users later, it appears to have a pretty good sense of what's happening in the cloud space. Recently, the company released its State of the Cloud report, including a survey of more than 1,000 companies about the clouds they're using and plan to use. Kim Weins, RightScale's vice president of marketing, came on the Structure Show this week to talk about the results. Here are the highlights of that interview, but you'll want to hear the whole thing for all of Weins's thoughts on which cloud platforms are most popular (as well as for my and Barb Darrow's takes on the week in cloud and big data news). And if you're really into learning about the future of cloud computing, the business models and the architectures, make plans to attend our Structure conference June 18 and 19 in San Francisco. It features a who's who of cloud executives, architects and users from companies such as Google, Amazon Web Services, Airbnb and more.

VMware: Lots of products, lots of lock-in and lots of interest

Of course, most respondents to the RightScale survey were using Amazon Web Services. However, Weins explained: "If you look within the enterprise segment … we saw that the vCloud Hybrid Service from VMware came in No. 2. That surprised us a little bit and we were a little bit suspicious of that for two reasons. One is, it's a pretty new service … . And the second is that people get often very confused about the different VMware products and which ones they're using. We call it 'vSoup.' They're not sure. You put 'v' in front of something, and if they're using anything VMware they say 'yes.'" When RightScale did some follow-up calls to determine whether respondents actually were using vCloud Hybrid Service, it found that more than half were experimenting with it, some others thought they were using it, and some others were just confused about which VMware products they were actually using. And as RightScale moves more into managing private cloud environments as well, Weins said it's seeing a lot of interest from customers that want to turn their vSphere servers into a cloud. So RightScale has developed a lightweight appliance that helps "cloudify" vSphere so it can be managed as part of the RightScale service. However, she added, as much interest as there is from VMware shops that want to bring that trusted environment with them into the cloud, there's also a concern: "What we're actually seeing more of there is people who are concerned as they move to cloud, they don't want to be in the all-VMware, all-the-time-forever camp. They want to preserve their options. They want to know that they're not locked into always using everything VMware, whether it's the hypervisor or other services, because they know that that's a costly option. So I think that people are being very cautious about how they dip their toes in the water there."

OpenStack: Yes, it might matter

"Definitely a lot of interest. Definitely a lot of interest," Weins said of OpenStack. "In the private cloud world, they're No. 2 really in adoption so far in terms of people running applications, but they have the most in terms of people experimenting or planning to use it." No. 1? VMware. If RightScale's data is indicative of the IT world at large, that puts a lot of pressure on the OpenStack community to get its act together.
"I think the one question mark is will people overcome the learning curve next year associated with OpenStack and the complexities of implementing it," Weins said. "… I think the jury is still out whether a lot of those experimenters are going to take the leap in the next year or two, or if it's going to take longer."

What of Google, Microsoft and the telcos?

In the RightScale survey, Microsoft Azure and the Google Cloud Platform (both the platform-as-a-service and infrastructure-as-a-service options) had more people interested in using them than actually using them. But that interest level is very high for both. "It was very interesting to see the interest in the PaaS options, both from Google as well as from Azure," Weins said. "… Now the difference between those two players is that within the larger companies Azure was stronger in terms of mindshare, and within the small and medium-sized companies, Google was stronger in terms of the mindshare." As for telcos like Verizon, AT&T and CenturyLink, which have invested heavily in their cloud services and are often suggested as natural fits to dominate enterprise cloud workloads, well … Weins said RightScale occasionally comes across telco users interested in using its management service. In the survey, "a handful of people" mentioned those providers in the "Other" category.

posted 11 days ago on gigaom
When you think of Android, you probably don't think of Amazon. You should. Amazon has slowly built up a product line on top of the open Android software, starting first with Kindle Fire tablets and more recently with its Fire TV set-top box. The third pillar of "fire" is shaping up to be Amazon's smartphone, with an expected June announcement. There's no official name yet for the long-rumored handset (I like the Fire Fone, but that's just me). Sources told the Wall Street Journal this week that Amazon's phone will go on sale three months after its introduction, meaning a September launch. Few details have leaked, save for the recurring rumor of multiple eye- and head-tracking cameras in the phone and a glasses-free 3-D screen. The former makes more sense to me than the latter, as Amazon can better learn where consumers are looking when browsing products on Amazon's website. While no other details surfaced this week, I'd be shocked if the phone ran anything other than Android. It simply makes sense given that Amazon already has a mobile app store filled with 200,000 titles for its Fire OS tablets; surely the phone would run the same software. And that does nothing to help Google, because Amazon's fork of Android doesn't include any Google services. Amazon, not Google, reaps all the rewards of gathering personal data from its devices.

Amazon Instant Video on iOS devices

I'm also expecting Amazon's phone to have another key difference from today's currently available Android handsets: Amazon Instant Video. Amazon released that app for Apple iOS devices but never for the Google Android platform. Of course, Android has its own share of "exclusives"; the upcoming Android Wear smartwatches likely won't work with Amazon's phone. Instead, you'll probably need a Google Android device for the LG G Watch or Moto 360 when they arrive in the next few months. Those thinking these smartwatches would be incredibly expensive got some good news this week: LG confirmed a £180 price for the G Watch in the U.K. That suggests a price of under or near $200 for the watch in the U.S., as device prices typically don't get converted by currency. At that price, the G Watch would fare well against other contenders for your wrist, such as the $249 Pebble Steel. Samsung too has wearables that work with Android phones (provided those phones have the Samsung name on them), even if the smartwatches themselves don't run Android. I'm currently taking the Gear Fit for a spin and shared some preliminary thoughts and details on the device. I think the hardware is outstanding, but Samsung has room for improvement on the software side. The company seems to have less of a challenge with the Samsung Galaxy S5, however, which gained mostly positive reviews this week. The phone is now available for sale and I have a review unit in hand, so I'll have more to say in the coming days about the Galaxy S5 and Gear Fit. I already have little doubt about the hardware: The phone is fast, takes excellent pictures so far, and has a fantastic display. Samsung has listened to consumers (and even some reviewers) who suggested the interface on the prior model was a bit clunky and non-intuitive. I'll find out how much better the new phone is and share details.

posted 11 days ago on gigaom
Today's package is whimsical, mostly because I am in that kind of mood.
One Mad Men episode from each season that is worth rewatching. Thanks, New York magazine, for basically making sure I don't do anything before Mad Men mania sweeps over me & Twitter.
Why we are in a new gilded age: Paul Krugman reviews Capital in the Twenty-First Century by Thomas Piketty.
Letterman's last great moment: Outside of John Gruber's Mac-related stuff, Bill Simmons' pop culture commentary is a must-read for me, and this piece about David Letterman doesn't disappoint.
Just cheer, baby: The life of a cheerleader isn't all fun and games; the hard life has led a Raiderette to sue the football team.
Is there anything beyond quantum computing? Scott Aaronson tries to answer the question.
More time is better than more money, says Kevin Kelly. I agree with him, but only when I have enough money in the bank.
Pimco's Bill Gross picks up the pieces: Sheelah Kolhatkar tells the story of the investing legend, who has been dealing with negative press following the exit of his CEO Mohamed El-Erian.

posted 11 days ago on gigaom
After months of rumors, Amazon's smartphone ambitions are reportedly set to take shape in June. That's when the company will introduce its smartphone, according to a Wall Street Journal report published Friday. Amazon's phone is expected to have multiple cameras and a glasses-free 3D experience when it goes on sale in September. Much of the Journal's report reiterates prior leaks, so there's not much new information here save for one of the most important aspects: an actual release date, or at least the months of Amazon's phone announcement and launch. As for those cameras? They'll "employ retina-tracking technology embedded in four front-facing cameras, or sensors, to make some images appear to be 3-D, similar to a hologram," said the Journal's sources. A September sale would likely pit Amazon directly against a new iPhone (or two) when vying for consumer purchases. Unlike Apple, however, Amazon typically doesn't seek to earn profits from hardware sales; instead it offers devices at lower prices and makes money from related software, services and goods sold through Amazon.com. The Journal's sources said that Amazon has been showing off early releases of the phone hardware to developers, likely to build interest. The company already woos developers to its Amazon AppStore, which hosts modified Google Android applications that run nicely on the company's Kindle Fire tablets. I suspect Amazon will continue to build upon the open-sourced version of Android for its phone, just as it does with the Kindle Fire and the new Fire TV. Doing so keeps software development costs down, as AOSP, the Android Open Source Project, offers the basic building blocks of smartphone software for free. In fact, with the Kindle Fire tablets, Amazon has already done much of the software work that's needed for a phone. There's a browser, an email app, and support for third-party software. Adding cellular radios and a corresponding phone application isn't a simple task, but the heavy lifting has already been done. One bit of software I anticipate will surely be on Amazon's phone is Amazon Instant Video. Although nearly any Google Android device can play music through Amazon's MP3 player or show e-book content in the Amazon Kindle app, not a single Android phone or tablet currently supports movies or television content through Amazon. The company has never released a version of Instant Video for Android, so keeping it for its own phone will certainly stir up a little demand.

posted 11 days ago on gigaom
As regulators attempt to sift through the possible public harms and benefits of Comcast's $45.2 billion plan to buy Time Warner Cable, we thought it was worth showing that if the deal takes place, it could lead to a significant jump in the number of broadband subscribers getting a data cap. If we add Time Warner Cable's 11.6 million broadband subscribers from the end of 2013 into the mix of customers with caps, the total percentage of U.S. homes that have some type of cap or other limit on downloads rises to 78 percent, up from 64 percent today. (A quick back-of-the-envelope check of that math appears at the end of this post.) That's a significant jump, especially after the number of homes with caps plateaued after 2011, when AT&T hopped on board the bandwagon that Comcast started driving in 2008. A side note for data nerds: The percentage of capped consumers could be a bit higher because the Leichtman Research Group data we use to calculate subscribers only accounts for 93 percent of the total number of broadband subscribers. Now, that's not to say that we will definitely reach that 78 percent, given that Comcast has pledged to divest itself of 3 million pay TV subscribers in order to help get the deal through regulatory screens. However, it's unclear which markets might be divested and whether or not those markets would go to a buyer that also has a cap. Of the major cable providers in the U.S., only TWC and Cablevision don't have caps. And even if you take out those 3 million broadband subscribers entirely, we're still looking at 74 percent of U.S. broadband subscribers hitting a cap. As a Time Warner Cable customer who currently doesn't have a broadband cap, I can't say that I view this deal as a good thing. I imagine that the 10 to 13 percent of U.S. homes who would join the capped majority would feel the same. There's still time for the FCC to take a harder look at caps (or, as Comcast calls it, a data threshold). For those who want to see who's capping their broadband, check out our chart from November 2013.
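As promised above, here is a minimal sketch of the arithmetic. The roughly 83 million subscriber base is not a published figure; it is backed out of the post's own 11.6 million TWC subscribers and the 64-to-78 percent jump, so treat it as an implied estimate rather than Leichtman's number.

```python
# Back-of-the-envelope check of the cap percentages cited above.
# The ~83M total base is inferred from the post's own figures, not an
# official Leichtman Research Group number.
twc_subs = 11.6e6                      # TWC broadband subscribers, end of 2013
total_subs = twc_subs / (0.78 - 0.64)  # implied U.S. broadband base, ~82.9M
capped_now = 0.64 * total_subs         # homes already under a cap today

print(f"implied total base:      {total_subs / 1e6:.1f}M")
print(f"capped share with TWC:   {(capped_now + twc_subs) / total_subs:.0%}")        # 78%
print(f"after divesting 3M subs: {(capped_now + twc_subs - 3e6) / total_subs:.0%}")  # ~74%
```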

Read More...
posted 11 days ago on gigaom
Aereo has plans to expand to 50 cities within the next 18 months if it wins its Supreme Court case, reports the Houston Chronicle, which recently got a tour of the Aereo facility there. The company is still keeping mum on current subscriber numbers, but CEO Chet Kanojia told the Chronicle that it's already profitable in Houston, where it has hardware to serve up to 40,000 subscribers. Aereo has to defend itself in front of the Supreme Court in two weeks. Story posted at: houstonchronicle.com

Read More...
posted 12 days ago on gigaom
In two months a Vancouver startup called Mojio will start selling its connected-car module, a plug-in device that connects your car to the cloud via T-Mobile's network and to your phone via Bluetooth. While there are several gadgets on the market that promise to turn your unconnected car into a connected one, Mojio has an interesting take on the market: it wants to turn its plug-in car module into an application development platform.

We first reported on Mojio back in 2012 when it kicked off an Indiegogo campaign for its module, which plugs into the onboard diagnostic (OBD) port found in all cars made in the last 18 years. Like competing devices, Mojio's module can upload acceleration, braking and engine-alert information to your smartphone, but Mojio has layered on a bunch of other apps that integrate that driver data with social networking, contacts, calendar and SMS features on your phone. Mojio launched the device in beta with its Indiegogo contributors last year, and in October it raised a $2.3 million seed round led by Relay Ventures. Now it's getting ready to release its commercial module to the public with several upgrades it hopes will set it apart from competitors like Automatic and Zubie, CEO and co-founder Jay Giraud told me in a recent interview.

Most significantly, Mojio is opening up APIs to developers, letting them design apps for the gadget the same way you'd design apps for iOS or Android. Those apps can tap into all of the vehicle diagnostic and location data Mojio draws from the car's controller area network (CAN), as well as the social networking and communications tools Mojio has built into its cloud-based platform. Those apps can be added to a user's module from what amounts to a connected-car app store, Giraud said (a rough sketch of what such a third-party app might look like appears below). Giraud said Mojio is working with multiple developers for its upcoming launch. One developer he did name is Glympse, which is looking to integrate cars into its location-sharing app. Right now Glympse lets you share your location temporarily from your smartphone, but inside of Mojio it becomes a beacon that would allow you to keep constant tabs on the location of your car.

Second, Mojio is partnering with T-Mobile US to connect its module to T-Mobile's HSPA+ network and sell the module through T-Mobile's retail channels. Mojio hasn't finalized the exact pricing details, Giraud said, but it's looking at two separate payment models: one in which you buy the device for $149 with no subscription fee whatsoever (including no data connectivity charges), and another with a monthly subscription fee of around $6, which includes access to both its cloud-based services and network access. Customers who signed up for the monthly plan would pay nothing for the hardware, Giraud said.

Mojio finds itself going up against a growing number of in-car module makers and app makers, each with a slightly different approach to connected vehicles. Zubie also offers mobile network connectivity, charging $100 a year for a subscription. Automatic relies solely on Bluetooth to communicate with your phone, while Dash recently launched its own software-only service that uses any off-the-shelf diagnostic interface gadget to connect your smartphone to the car. By launching with a stable of third-party apps, Mojio hopes to differentiate itself from that pack. That strategy means attracting developers, who themselves are attracted to devices that ship in large volumes. While that developer community could take a while to build, Mojio's module will have plenty of functionality to make it useful in the interim.
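The APIs themselves aren't detailed in this post, so the following Python sketch is a rough illustration only of the kind of third-party app such a platform enables: polling a vehicle-telemetry endpoint and reacting to a hard-braking event. The host, endpoint paths, field names and token are hypothetical placeholders, not Mojio's actual API.

    import requests

    # Hypothetical sketch only: the host, endpoint paths, field names and auth
    # scheme below are placeholders for illustration, not Mojio's published API.
    API_BASE = "https://api.example.com/v1"  # placeholder host; won't resolve as-is
    TOKEN = "YOUR_ACCESS_TOKEN"              # placeholder token

    def latest_telemetry(vehicle_id):
        """Fetch the most recent diagnostic/location snapshot for one vehicle."""
        resp = requests.get(
            f"{API_BASE}/vehicles/{vehicle_id}/telemetry/latest",
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    def hard_braking(sample, threshold_g=0.4):
        """Flag a hard-braking event from a (hypothetical) deceleration field."""
        return sample.get("deceleration_g", 0.0) >= threshold_g

    if __name__ == "__main__":
        # With a real host and token, this would poll the latest sample.
        sample = latest_telemetry("demo-vehicle-id")
        if hard_braking(sample):
            print("Hard braking detected near", sample.get("location"))

The point is less the specific calls than the model: the module streams data to Mojio's cloud, and developers build against that cloud platform rather than against the hardware directly.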
Mojio's future roadmap also includes new hardware, Giraud said. Mojio is looking into building a module that includes both LTE and Wi-Fi connectivity, which would connect not only cars to the internet but also the tablets, smartphones and other Wi-Fi gadgets passengers bring with them.

Read More...
posted 12 days ago on gigaom
In the business world, the voice is a powerful thing. In meeting rooms, offices and conference calls, it's how ideas are generated, mandates given and gauntlets thrown down. Yet, somehow, the record of all these discussions doesn't quite do them justice: messy handwritten (and probably incomplete) notes, typed meeting minutes that don't distinguish idle chatter from meaningful business or, worse, no record at all.

Thanks to advances taking place in computing and machine learning, that's all about to change. Take, for example, a startup called Gridspace that wants to make meetings more productive by outsourcing note-taking to a machine. It's a challenging problem to solve — any solution must provide a seamless experience, as well as be accurate — but the company is trying to do it right. It has built a product that bundles smart hardware and applications with several flavors of speech recognition, voice recognition and natural language processing.

The most noticeable piece of the puzzle is the hardware — a simple, small recording device called the Memo M1 that sits on a desk or table. It's always on, although its ambient light and motion sensors let it kick in only when someone is actually in the room. It has radio sensors to help determine who's in the room based on their mobile phone fingerprints, although voice recognition helps make this more accurate, as does pre-planning the meeting using the Memo app and listing the participants. The Memo service works with conference lines as well (it can be set up to automatically call participants), and there's a mobile app available for recording conversations on the road.

After a meeting is done, Memo will email everyone the highlights of the meeting and give them an opportunity to go through and comment on or flag certain parts. The next day they'll receive a fuller digest, complete with that post-facto information. At any time, participants can listen to the highlights of the meeting, which presumably are important points or action items, or they can hear the whole thing. They can search for specific parts by word or person.

The Memo mobile app. Source: Gridspace

Gridspace CTO Anthony Scodary described the user experience design as being focused on minimizing changes to how we go about our days in the office. Set up to its fullest potential, Memo users don't have to press a button, set up something in an app, or even speak a command at something to take advantage of the service. "It's really just [about] designing interfaces … that make something that you don't have to change your natural behaviors much," he said.

Getting it right means getting NLP right

As seamless as the experience might be, though, it's Gridspace's work on natural language processing and speech recognition that could make or break the company. All the automation and search capabilities in the world don't mean much if a system designed to capture meetings can't understand what's happening or what's being said. And after all, as Scodary acknowledged, "The end goal [of Memo] is to generate what is essentially the highlight reel of a meeting." Memo has several methods for deciding what might be important, ranging from certain keywords being spoken (e.g., "This is important.") to someone manually pressing a button on the M1 device to flag a moment as important. Even changes in volume or lots of people talking over each other might indicate a key part of the conversation.
However, as with many machine learning systems today, it's the input of humans that will help train Memo to be as accurate as it can be, Scodary explained. The more that people go through afterward and verify the system was correct, or flag important parts it missed, the smarter it gets. When someone "inputs unambiguously that something is important," he said, Memo analyzes the context around those sections and readjusts the weights in its algorithms accordingly (a simplified sketch of that kind of weighting appears at the end of this post).

Pressing to flag content or mute the recorder. Source: Gridspace

Out of the boardroom and into the hallway

If Gridspace, which is still running closed pilot projects and taking reservations for its M1 devices and mobile app, can pull this off, it could have promise even beyond the conference room. Scodary envisions a future where people have Memo devices sitting on their desks, ready to capture an impromptu brainstorming session or maybe just a short chat about the all-hands meeting earlier in the day. "We're very interested in those three-minute meetings between your other meetings," Scodary said. (And don't worry: there's a mute button if you're going to complain about the boss, and Scodary said the company is working on voice commands to strike previous comments and to delete parts of a meeting that has already happened.)

Frankly, this vision is the kind of thing one can see a company like Microsoft or Google chasing, too, as they strive to own productivity by owning the crossroads of collaboration, communication and devices. This type of technology could find its way into an already sensor-packed smartphone, tablet, desktop or even wearable — Intel recently showed off a new mobile processor designed with voice recognition in mind — and integrate with existing office suites and meeting applications. Their teams of artificial intelligence researchers — who have already made speech recognition commonplace on smartphones and gaming systems, and who are advancing the state of the art in language understanding — could help make such a system faster, more accurate and even predictive. At home or in the office, our voices could soon be inputs to our computers just as important as our keystrokes. Once we figure out how to avoid putting our collective foot in our mouth, we'll probably be thankful for it.
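Gridspace hasn't described its models in technical detail, but the signals Scodary mentions (spoken keywords, volume changes, manual flags and after-the-fact human feedback) map naturally onto a simple weighted-scoring scheme. The Python sketch below is purely illustrative under that assumption; the field names, thresholds and starting weights are invented for the example and are not Gridspace's implementation.

    # Illustrative only: score transcript segments for "highlight" likelihood
    # from a few signals the article describes, then nudge the signal weights
    # when a user confirms or corrects a highlight after the meeting.

    KEYWORDS = {"this is important", "action item", "let's decide"}  # invented examples

    weights = {"keyword": 0.5, "loudness": 0.3, "manual_flag": 1.0}  # assumed starting values

    def signals(segment):
        """segment: {'text': str, 'loudness_z': float, 'flagged': bool} (hypothetical fields)."""
        text = segment["text"].lower()
        return {
            "keyword": any(k in text for k in KEYWORDS),
            "loudness": segment["loudness_z"] > 1.5,  # unusually loud or people talking over each other
            "manual_flag": segment["flagged"],        # someone pressed the button on the device
        }

    def score(segment):
        """Weighted sum of whichever signals fired for this segment."""
        return sum(weights[name] for name, on in signals(segment).items() if on)

    def feedback(segment, was_important, lr=0.05):
        """Crude perceptron-style update: strengthen signals that fired on confirmed
        highlights, weaken them when the system flagged something unimportant."""
        direction = 1.0 if was_important else -1.0
        for name, on in signals(segment).items():
            if on:
                weights[name] += lr * direction

    seg = {"text": "OK, this is important: ship the beta Friday.", "loudness_z": 0.2, "flagged": False}
    print(score(seg))          # 0.5 (only the keyword signal fires)
    feedback(seg, True)        # a user confirms it was a real highlight
    print(weights["keyword"])  # 0.55 — the keyword signal is now weighted more heavily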

Read More...
posted 12 days ago on gigaom
The National Security Agency has known about the Heartbleed bug, which may have compromised two-thirds of the world's web servers, for over two years, and has been actively trying to exploit it, according to reports. The revelation, which is likely to outrage a security industry already furious at the NSA, comes by way of Bloomberg, which cites two unidentified sources and reports:

"Putting the Heartbleed bug in its arsenal, the NSA was able to obtain passwords and other basic data that are the building blocks of the sophisticated hacking operations at the core of its mission, but at a cost…The agency found the Heartbeat glitch shortly after its introduction, according to one of the people familiar with the matter, and it became a basic part of the agency's toolkit for stealing account passwords and other common tasks."

The news comes as companies and governments are still reeling from last week's disclosure of Heartbleed, which lets attackers extract sensitive data from servers running OpenSSL, the open-source encryption library used to protect passwords and other sensitive data (a toy illustration of the bug appears at the end of this post). The vulnerability has exposed companies like Yahoo and Google, as well as hardware providers like Cisco, and led the Canadian government to temporarily shut down its tax preparation service. For now, however, it's not clear how much actual damage has been done — or whether only a handful of people, including those at the NSA, knew about the vulnerability. Some reassurance came today when security service CloudFlare said it is unlikely that hackers have been able to use Heartbleed to obtain the private SSL keys used by websites. Companies have been actively patching their sites since last week's disclosure.

While Heartbleed represents a useful weapon for the NSA to spy on its opponents, the agency's failure to disclose it will anger those who believe that the U.S. government should focus on defensive measures like encryption and security — rather than using compromised standards as a means of attack. The NSA is still under criticism following disclosures by former contractor Edward Snowden that it deliberately introduced weaknesses into other global encryption standards.
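This post doesn't go into the mechanics, but the bug class is simple to illustrate: OpenSSL's heartbeat handler trusted the payload length a client claimed to have sent, rather than the length it actually received, and echoed back that many bytes of adjacent memory. The toy Python sketch below mimics that pattern against a fake in-process buffer; it is a simplified illustration of the concept, not OpenSSL's code and not an exploit for real servers.

    # Toy illustration of the Heartbleed bug class, not OpenSSL's actual code:
    # the server echoes back as many bytes as the client *claims* it sent,
    # without checking that claim against the payload it actually received.

    process_memory = bytearray(b"heartbeat:hi|secret_password=hunter2|private_key_bits...")

    def vulnerable_heartbeat(payload, claimed_len):
        """Echo the payload back, trusting claimed_len (the bug)."""
        start = process_memory.find(payload)
        return bytes(process_memory[start:start + claimed_len])  # over-read past the payload

    def patched_heartbeat(payload, claimed_len):
        """The fix: discard requests whose claimed length doesn't match reality."""
        if claimed_len != len(payload):
            return b""
        return payload

    print(vulnerable_heartbeat(b"hi", 40))  # leaks adjacent "memory" beyond the 2-byte payload
    print(patched_heartbeat(b"hi", 40))     # b"" — the malformed request is ignored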

Read More...
posted 12 days ago on gigaom
Chromecast owners just got a few more ways to beam audio to the big screen: Player FM, a podcast app and cloud service that we previously covered on Gigaom, added Chromecast support to its Android app Friday. Also now Chromecast-capable is Rocket Music, an Android music player that includes features like an equalizer and lyrics viewing. Don't want to listen to your podcasts or music on your TV? Then you can always turn Chromecast into a networked audio player.

Read More...
posted 12 days ago on gigaom
The University of Southern California's robots usually lurk in dim basement labs and rooms tucked at the end of winding corridors. They didn't actually emerge on Thursday, but the public was invited inside for an intimate look at what the school's engineering and artificial intelligence experts have been building.

First, meet the ARM robot, which uses a camera and depth sensors to see an object and pressure pads on its hands to register when it has grasped it. This cute little guy is the NAO robot, which pops up at a lot of robot expos doing very different things. I've seen it play soccer and act as a social companion. But at USC it was actually leading an exercise session, inviting its observers to do lunges, squats and jumping jacks. It acted out each movement. USC's other interactive robots included Romibo, dressed up like a dragon… …and the school's Bandit-II robot, which invited participants to copy its movements. If they messed up, it made them start all over again. Bandit-II's lips and eyebrows move, giving it a wide range of emotions.

One of the more unusual robots was the EcoMapper: an autonomous underwater robot that can collect data on water quality and map the floors of bodies of water. The Zeus 3D printer also made an appearance. When I wrote about the Zeus last year, most of the images were renderings. But now the machine is very real. I saw it print and scan, and got a glimpse of the beautiful user interface its team promised.

National Robotics Week events are going on through Sunday all over the country. Check out events near you here.

Read More...
posted 12 days ago on gigaom
With the Samsung Galaxy S5 now available, PayPal is making good on its promise to use the handset's fingerprint reader. The company released mobile apps specifically for the Galaxy S5 and Samsung's latest wearables on Friday. Using the phone app, you can log in to your PayPal account with a fingerprint scan instead of a typed password and make payments online or at participating retail locations that accept PayPal payments.

PayPal actually announced the software in conjunction with the Galaxy S5 introduction at February's Mobile World Congress. Until now, however, no devices were available to use the app. Here's a short demonstration of how the PayPal app works:

The idea of using a fingerprint for account authentication instead of a typed password is rather timely, given that the massive Heartbleed security flaw may have exposed passwords on two-thirds of the world's servers. Clearly, neither PayPal nor Samsung knew this would happen when they announced the mobile payment feature in February. The situation could bring awareness to Samsung's newest phone since it uses biometrics instead of a password. And even if that fingerprint data were stored on PayPal's servers, those servers aren't affected by Heartbleed, according to LastPass.

Samsung's newest handset isn't the only device that can use a new PayPal app, however. PayPal is also available on the Samsung Gear 2 smartwatch and Gear Fit wearable, so you can make payments, redeem offers and receive payment notifications from your wrist.

Read More...
posted 12 days ago on gigaom
We're getting closer to a Chrome OS tablet thanks to Lenovo, which is showing off its touchscreen convertible ThinkPad Yoga 11e. The $349 Chromebook arrives in June, and although it's geared for the education market, Lenovo is taking a cue from Dell and planning to sell the Yoga 11e to consumers as well. Lenovo announced the device back in January and is now getting ready for the product launch. Brad Linder of Liliputing got a chance to use an early prototype — that's why the touchscreen doesn't work 100 percent for him — and shared this video demonstration of what to expect from the Chromebook.

Clearly, the Yoga 11e isn't the first touchscreen Chromebook to hit the market. Google's Chromebook Pixel claimed that prize when it launched a year ago, and Acer followed with a lower-cost touchscreen model of its C720 Chromebook. Lenovo can claim the first convertible touchscreen Chromebook, however, because, like other Lenovo Yoga products, the Yoga 11e lets you fold the screen all the way to the back of the laptop. That makes the on-screen keyboard in Chrome OS a bit more valuable, because the Yoga 11e can essentially be used like a Chrome OS tablet as needed. Or you could flip the screen back up and use the traditional ThinkPad keyboard. I suggested this very use case earlier this year, noting that this type of form factor was more likely than an actual Chrome tablet because Chrome OS isn't yet touch-optimized.

Read More...
posted 12 days ago on gigaom
Remember Staples? It's the latest company considering offering 3D printing to customers, after announcing today that it has partnered with 3D Systems (ticker: DDD) to test printing stations in two stores. This isn't Staples' first venture into 3D printing: last September, some of its European stores began offering to print items for customers on Mcor printers, which build 3D objects with layers of paper. But the new U.S. centers are meant to be more experiential. Someone who is totally new to 3D printing can walk in and, with the help of staff members, design an object on the spot or print a premade design. Staples is also offering photo booths where customers can take a 3D picture of themselves and then print it.

A 3D photo booth in a Staples store. Photo courtesy of Staples.

The service will compete with UPS's rollout of testing locations last year. While Staples' 3D Systems printers are desktop, consumer-oriented machines, UPS is offering professional printers from 3D Systems rival Stratasys. As a result, Staples might be more appealing to an individual looking to print something for personal purposes, while UPS is more business- and artist-oriented.

The "Kinkos model" is a potential direction in which 3D printing could move. People might not want to invest in their own desktop 3D printer; instead, they could travel to a central location when they need to print an object. It ruins some of the on-demand benefits of 3D printing ("My spatula broke. I'll print a new one!"), but still allows people to make highly customized objects without paying hundreds or thousands of dollars for a personal printer. Staples' plan to have customers come into the store and initiate a print job themselves, instead of emailing it in ahead of time, is unusual and might be off-putting, as it usually takes anywhere from half an hour to several hours to print a single object. But if it works, there would be a lot of people wandering around Staples stores with time to burn.

A Staples 3D printing center. Photo courtesy of Staples.

Read More...
posted 12 days ago on gigaom
Like most companies, Twitter is happy to put out numbers that make the service look as popular as possible, like the 240 million or so figure it uses for "active" users, defined as anyone who logs in at least once a month. But it rarely talks about what many see as the most important number, namely the number who actually tweet — which is probably why the most recent estimate from Twopcharts, as quoted in the Wall Street Journal, has gotten a lot of attention: it says 44 percent of accounts have never posted a single tweet.

As many people have pointed out — including Twopcharts itself — this kind of data is problematic at best, in part because it is based on fuzzy estimates rather than data that comes directly from Twitter. It's also difficult to figure out how many of the almost 1 billion accounts that Twopcharts says have been created since Twitter began were created by users who signed up again under another name. That said, however, the idea that Twitter has a billion or so accounts, but only 200 million of those users even sign in once a month (let alone post a tweet) and almost half have never posted a single status update, seems somewhat troubling. But should it be? And if it is, what should Twitter do about it?

.@rsarver @jasondfox also 99.99% (how many nines?) of TV viewers have never made a TV show. — Josh Elman (@joshelman) April 11, 2014

Is Twitter still too hard to use?

We don't expect everyone who reads blogs to have one, nor do we expect everyone who reads a book to have written one — but Twitter has always seemed different, in part because it is so easy to post a tweet. And yet, for anyone who follows the science of social networks, it's not surprising that Twitter would fit the 90-9-1 ratio, in which the vast majority of users simply consume (a rough back-of-the-envelope version of that math appears at the end of this post). There's at least some evidence that Twitter is concerned about this number, because senior executives of the company have talked a number of times about moving the "scaffolding" of Twitter into the background somehow — by which they mean the machinery that can often be confusing for new users, like the @ symbol or the hashtag or the retweet, or the fact that you sometimes have to use a period in front of your tweet so that everyone will see it.

The number Twitter seems the most concerned about, however, is the overall user number — the one that caused some mild panic among shareholders and investors when Twitter admitted in its first-ever earnings conference call that its growth was flattening. Getting that figure — and the active-user figure — to grow is the reason why Twitter has been adding features and redesigns like a mad thing recently. Everything from experiments like @MagicRecs (which doesn't seem to have performed very well) to the addition of new Facebook-style profile designs seems intended to make the service more appealing for new users. But there is still much work to be done, if the comments on a recent WSJ piece are any indication.

Twitter needs to broaden its reach

Twitter is also clearly concerned — as it should be — about the difficulty of finding new people or accounts to follow, and of sorting through the massive amount of content that comes from half a billion tweets a day. That seems to be the rationale behind the company's acquisition of Cover, a small startup that was working on an adaptive home screen for Android devices, one which changed what it showed users based on their environment, time of day, etc.
Just as Google is trying to do with Google Now, Twitter needs to get better at surfacing content automatically, without waiting for users to click and say that they are interested in a specific tag or keyword. The service's "Discover" tab is relentlessly pathetic at this, despite the time and resources that Twitter has devoted to it — which could explain why two of the main designers responsible for that feature recently left the company.

In the end, the number of people who actually post a tweet is always going to be a relatively small fraction of the overall user figure. That's not to say Twitter shouldn't be concerned about non-tweeters, but it has much larger fish to fry. It needs to figure out why some people don't use Twitter at all — why they sign up and then never return. What can it do to convince them to stay? It's not clear that imitating Facebook is going to work, but it has to do something.

Post and photo thumbnails courtesy of Shutterstock / Tim Stirling
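As mentioned above, a rough back-of-the-envelope pass over the figures quoted in this post helps put the 90-9-1 claim in context. The Python sketch below uses the article's approximate numbers; the 90-9-1 split is a rule of thumb, not a Twitter-reported statistic, so treat the output as illustrative context only.

    # Back-of-the-envelope context for the participation numbers in this post;
    # the 90-9-1 split is a rule of thumb, not Twitter-reported data.

    total_accounts = 1_000_000_000   # "almost 1 billion" accounts ever created (Twopcharts)
    monthly_actives = 240_000_000    # Twitter's own "active user" figure
    never_tweeted = 0.44             # Twopcharts estimate of accounts with zero tweets

    print(f"sign in monthly: {monthly_actives / total_accounts:.0%} of all accounts ever created")
    print(f"have tweeted at least once: {1 - never_tweeted:.0%} of all accounts ever created")

    # Applying 90-9-1 (90% lurk, 9% contribute occasionally, 1% create most content)
    # to the monthly actives suggests only a small core of regular tweeters.
    core_creators = 0.01 * monthly_actives
    occasional = 0.09 * monthly_actives
    print(f"90-9-1 core creators: ~{core_creators / 1e6:.1f}M; occasional contributors: ~{occasional / 1e6:.1f}M")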

Read More...
posted 12 days ago on gigaom
In this week's bitcoin review, we recap how Chinese regulation rumors are causing the price to fall.

Is anyone to blame for the price downslide?

Rumors continue to swirl that China is starting to crack down on bitcoin trading by freezing some bitcoin exchanges' bank accounts. In an announcement posted on its site yesterday, BTC Trade said that it had received notice from its bank that its account would be frozen on April 15 if it did not stop using the account to conduct bitcoin-related business. Chinese exchanges Huobi and BTC100 also posted notices that they had received similar calls about their bank accounts. Around the time of the announcements, the price of bitcoin took a huge tumble, falling nearly 18 percent in one day. The market then rebounded when the governor of the People's Bank of China said during an economic conference that it was out of the question for the bank to ban bitcoin, because the bank didn't create it. Instead, according to the reports, he viewed it as more of an asset or a collectible, like stamps. While that did help ease some uncertainty in the market, it wouldn't be out of the question for the price to see a couple more free falls if more Chinese exchanges are faced with the threat of frozen bank accounts.

The market this week

In a scary moment for bitcoin holders, the price dipped below $400 on Thursday and fell 18 percent to close at $360.84. It has since made a major rebound and is up 17 percent to $425 as of 10:45 a.m. PST. For background on why we're using CoinDesk's Bitcoin Price Index, see the note at the bottom of the post.

In other news we covered this week: The MtGox drama continues, as CEO Mark Karpeles is likely to face arrest over the exchange's legal problems should he set foot on U.S. soil. Circle's CEO thinks the future of bitcoin will be determined by central banks, standards bodies and corporate contributors — not quite the decentralized system of bitcoin's early dreamers. And bitcoin continues its consumer-friendly push after Cryptex announced a bitcoin-to-cash debit and ATM card.

Here are some of the best reads from around the web this week: Ezra Klein's new Vox Media got into the bitcoin game right away, publishing its first piece on why bitcoin is a bad currency that will change the world, along with 19 "cards" that explain in layman's terms what the currency is. A New York Times reporter wrote about his bitcoin befuddlement and his process of trying to understand it: "The first thing I found out? This is the closest thing in finance to riding an angry bull at the rodeo." The largest bitcoin "mine" in North America looks more like a greenhouse than a traditional mine, and it's on the outskirts of a small town in central Washington. Bitcoin also goes to Washington — D.C., this time: Robocoin brought an ATM to Capitol Hill and then taught congressmen how to buy cryptocurrency. Bitcoin is also headed to the classroom: NYU announced it will offer a class this fall on the legal and financial issues around the cryptocurrency world — that is, if it still exists in the fall.

Bitcoin in 2014

The history of bitcoin's price

A note on our data: We use CoinDesk's Bitcoin Price Index to obtain both a historical and current reflection of the bitcoin market. The BPI is an average of the three bitcoin exchanges that meet its criteria: Bitstamp, BTC-e and Bitfinex (a minimal sketch of this kind of averaging appears at the end of this post). To see the criteria for inclusion or for price updates by the minute, visit CoinDesk.
Since the market never closes, the "closing price" noted in the graphics is based on end-of-day Greenwich Mean Time (GMT) or British Summer Time (BST).

Featured image from Flickr/BTC keychain
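As a rough illustration of the averaging mentioned in the data note above, here is a short Python sketch that averages last-trade prices from three exchanges and reports the day-over-day change. The exchange names are placeholders and the sample prices are invented (chosen so the average lands near the $360.84 close mentioned in the post); CoinDesk's actual BPI methodology, including weighting and outlier handling, is not reproduced here.

    # Illustrative sketch of a simple average-price index and a percent-change
    # calculation; the sample prices are made up, not real market data.

    def price_index(prices):
        """Plain average of the latest USD price reported by each exchange."""
        return sum(prices.values()) / len(prices)

    yesterday = {"exchange_a": 438.10, "exchange_b": 441.75, "exchange_c": 443.60}
    today     = {"exchange_a": 359.90, "exchange_b": 361.20, "exchange_c": 361.42}

    idx_then, idx_now = price_index(yesterday), price_index(today)
    change = (idx_now - idx_then) / idx_then

    print(f"index: {idx_then:.2f} -> {idx_now:.2f} ({change:.1%})")  # roughly an 18% drop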

Read More...