posted 6 days ago on gigaom
The Verge published an in-depth exposé on eccentric energy and computing entrepreneur Mike Cheiky and how he’s been able to convince Valley venture capitalists to invest in his companies, despite some early questionable scientific claims. Cool Planet announced last month that it raised another $100 million (from folks like Google Ventures, BP, UBS, Goldman Sachs and others), to which my response on Twitter was “I thought this was an April Fools joke, but looks like not.” The companies that Cheiky founded have distanced themselves from him, and some have changed directions and business models. But the energy industry is particularly susceptible to what I once called “snake oil energy salesmen and green bamboozlers.” Story posted at: theverge.com

Read More...
posted 6 days ago on gigaom
Nexus 5 handsets on Sprint’s network have a software update available. According to Sprint’s own support page for the phone, a new Android build dubbed KTU48F is coming to phones starting Monday. Android Central found the software information, noting that this is likely Android 4.4.3 and will be made available for Nexus phones on various carriers. Sprint’s Nexus 5 is unusual, however, due to the carrier’s Project Spark, which dynamically chooses which LTE band to use for mobile broadband activities. Video downloads, for example, may be routed over a faster channel while basic email and other notifications are sent over a slower one; in the latter case, the user likely wouldn’t see a speed difference since so little data is actually being transmitted. Sprint says the new Android software will add support for Project Spark’s band 26 and band 41, along with miscellaneous Android updates. Android Police created a list of expected Android 4.4.3 tweaks at the end of March, suggesting that the software will address small fixes for radio, data and camera focus issues. Bigger changes are likely coming at June’s Google I/O developer event, although it’s possible Google repeats history from 2013: last year Android was massively improved without a major Android release; instead, Google updated core common services and APIs to bring the big changes.

Read More...
posted 6 days ago on gigaom
Google may soon give greater prominence in its search results to websites that use encryption, a move that would indirectly make it more difficult for hackers or governments to track what people do on the internet. According to the Wall Street Journal, Google executive Matt Cutts suggested at a recent conference that the search giant is considering an algorithmic boost for websites that encrypt data. Web developers consider Cutts’s public statements to be significant because they telegraph forthcoming changes to the all-important Google rankings, although the story also suggests that Google will not be making any changes in favor of encryption anytime soon. Cutts’s proposal comes after tech companies like Google and Yahoo have moved to encrypt more data in response to controversy over NSA spying revelations. Encryption means that it’s much harder for outside parties to “listen in” as data travels between a company’s website and users’ computers, but — as the ongoing scare over the Heartbleed bug shows — it’s not perfect. Any Google decision to emphasize encryption in search results would ripple widely because so many developers design websites in accordance with Google’s best practices.

Read More...
posted 6 days ago on gigaom
As I watched Microsoft’s Joe Belfiore use Windows Phone 8.1 on stage earlier this month, I kept thinking that in many ways, Microsoft has finally caught up to its competitors in the smartphone software business. After using Windows Phone 8.1 for the past week, the promise that Belfiore showed off has become reality: Windows Phone 8.1 is a superb update to Microsoft’s smartphone operating system.

Cortana is my new best friend

Arguably the biggest new feature is Cortana, a personal digital assistant of sorts. I’m a heavy user of the similar Google Now feature on Android phones, and I’ve been relying on Cortana to provide me with information just as often as I do with Google Now. The good news is that in some ways, Cortana is even better. The so-so news is that sometimes, her performance is just that: so-so. Cortana is still a beta service, however, and I’ve given Google a pass before on beta services; it’s only fair to give Microsoft one too. In general, Cortana works as advertised. Simply tap a button and ask a question or state a command out loud. Cortana’s voice recognition worked quite well in my testing, only occasionally garbling a word or two. She uses Bing for searches, and when she can, Cortana will speak answers to your questions. When she can’t, expect Bing results on screen. What I like here is how Microsoft has blended the best of Google Now and Apple’s Siri software. Google Now surfaces contextual information but has no personality. Siri has plenty of personality but is feature-limited and not really open to a wide range of third-party apps or services. Cortana acts as a contextually aware assistant that does have some personality — exactly the mix you’d expect from a modern digital assistant. I also appreciate the approach Microsoft has taken with personal data. Cortana keeps a notebook of your interests and data so you know what the software is considering when answering your queries and commands. You can personalize that notebook by adding or removing information. And it’s up to you if you want Cortana to scan your email to add more information to your local notebook. You manage your own information instead of giving carte blanche to scour emails at the server level, which is Google’s approach. I think many consumers and enterprises will appreciate this and give Cortana a try because of it. One thing I don’t like is how many button presses it can take to get Cortana to work. Admittedly, I’m spoiled by the always-listening functions found in certain Android phones: I can simply say “OK Google Now,” for example, and get information without even touching the phone. Cortana is an app — one that I placed on my Start screen at first — so with that setup I have to tap the app and then tap the microphone button before speaking. There’s a simpler way to wake and use Cortana, though: tap and hold the dedicated Search button to “wake” her and immediately speak your command.

Visual updates and an easier way to get around

Microsoft has long said that Windows Phone is the most personal handset software, and that theme continues with version 8.1. Although I never minded the basic home or Start screen colors, you can now personalize them with your own photos. Yes, it’s one of those little things that competing phones have long been able to do, but many people will appreciate it. The new Action Center is a simple way to view or change important phone settings. It’s just like Android’s notification shade and Control Center in iOS, which followed.
It doesn’t really matter which company had the idea first, though: It’s super helpful. Just swipe down from the top of the display on any screen and you’ll have one-touch access to Airplane Mode, Wi-Fi, Bluetooth and screen rotation lock by default; you can pick which quick actions appear. If you want to see all of your phone’s settings, there’s a button for that as well in the Action Center. This is also where your notifications reside — a welcome improvement, though it’s still a little lacking. On other platforms you can interact with notifications directly; in Windows Phone 8.1, you can only tap them to open the corresponding app, which is still an extra step. After using the Action Center — and Cortana, for that matter — I’ve found that these functions help offset some of the challenges of Windows Phone app navigation. It’s faster to search, set a reminder or check local weather with Cortana. It’s quicker to use the Action Center than to find the Settings tile on your phone or hold the Back key down to see open apps and find Settings. Microsoft is bringing more efficiency to navigating Windows Phone without modifying the multitasking approach it already had in place.

Odds and ends and the big app question

While this is a “point” update, meaning Windows Phone 8.1 should only incrementally improve upon Windows Phone 8.0, there are tons of small goodies in here. The new Word Flow keyboard lets you quickly swipe through letters to type words. I was already enamored of the stock keyboard — I find it’s among the best available — but Word Flow makes it even better. Wi-Fi Sense can log you in to public hotspots with ease or be used with private networks to share Wi-Fi with friends in a controlled manner. Battery Sense shows which apps are slurping power. VPN support has been added, although I didn’t test that particular function. Traditional voice calls can be turned into Skype video calls with a button press. Internet Explorer 11 has a great new Reading Mode to show pure content. And the list goes on. Make no mistake: All of these are welcome features. And in many ways Microsoft has brought Windows Phone 8.1 on par with competing mobile platforms. In some very specific ways, it may even exceed them. So Windows Phone 8.1 will bring worldwide domination for Microsoft in the smartphone market, right? Not so fast. For all of the super new and feature-rich improvements found here, there’s still the question of developer support and third-party applications. Let’s face it: The basic features of a phone are “table stakes” in this game, and beyond that are the apps that people want to use. I think the future here is a bit brighter than it was, mainly because of Microsoft’s Universal Windows Apps strategy, which Windows Phone 8.1 supports. Essentially, Microsoft has made it easier for programmers to make an app that works on phones, tablets and computers powered by Windows. That brings huge potential for more great apps on Windows Phone devices, but for now, it’s simply that: potential.

Photo by Kevin Tofel/Gigaom

It’s going to take time for developers to take advantage of the new universal app approach. Until then, I think people will be very impressed by what Microsoft has brought to the table with Windows Phone 8.1. There’s much to like here, and after a few years of trying to close the gap with its competitors, Microsoft has done just that in a considerable way with the new software.

Read More...
posted 6 days ago on gigaom
Seeing Beyond Technology: Advanced Management Program for Digital Leaders
A Continuing Education Course at the University of Southern California
Spring Session: May 5-9, USC Campus, Los Angeles
Register here

DISRUPTION. INNOVATION. CONVERGENCE. Technology is evolving at an unprecedented rate, transforming business models and the user experience. Hosted by USC’s Institute for Communication Technology Management (CTM), the Advanced Management Program (AMP) is focused on managing and leading in the age of mobile, digital, social, big data and the cloud. Participants typically represent the communications, technology and entertainment sectors. Course topics include: the connected, digital consumer and the emerging competitive landscape; business strategy and innovation; Millennials as customers and employees; and driving positive change through executive storytelling. (For the course brochure, see: http://classic.marshall.usc.edu/assets/162/26169.pdf)

REGISTER NOW: Enrollment in the Advanced Management Program is limited in order to provide maximum opportunity for interaction and teamwork. To register for the course now, go here.

DISCOUNTED ADMISSION FOR GIGAOM SUBSCRIBERS: Gigaom subscribers can receive a discount when they pay for the course. Simply enter the code GIGAOM when you register. Your fee will be reduced from $8,400 to $6,000.

About us: Founded in 1985 at the University of Southern California, CTM is the world’s foremost institute at the intersection of technology and content. It unites a powerful network of industry leaders involved in every facet of the digital media value chain. For more on CTM, go to www.marshall.usc.edu/ctm.

Read More...
posted 6 days ago on gigaom
Looks like bug trackers may be the next price-war battlefield. Axosoft just cut the price of its bug tracking software, which had listed for $70 per year, to $1 per year for the entire organization. Not a ton of margin there, but it’s an eyebrow-raising ploy for the Scottsdale, Ariz.-based company. Axosoft’s bug tracker competes with Atlassian’s Jira, which starts at $10 for 10 users per month, but other contenders include Fog Creek Software’s FogBugz ($25 per user per month); JetBrains’ YouTrack ($1,500 per year for a 50-person team); Pivotal Tracker ($2,000 per year for a 50-person team); and BugHerd (about $2,100 per year for a team of 50). If you do the math (see the sketch at the end of this post), the latter three products are pretty darned inexpensive per user but still require a relatively big outlay for the whole team. One Jira user was intrigued but not sold. “I’m not sure the world needs another ticketing system,” said Michael Cizmar, president of MC+A, a Chicago-based development shop specializing in portals and search. “Jira [cost] does start to ramp up after 10 people but do costs matter that much? Sounds like this product would compete more against open-source than Jira. If it’s got feature parity you might consider it, but then again, you want the vendor to make some money so it can support the product,” he said. Axosoft does offer other products for scrum developers. (Scrum is an agile software development methodology that enables developers to quickly prioritize, incorporate and test new features in a product.) Of course, offering one key tool (say, a bug tracker) for a rock-bottom price is a way to get developers to look at the entire portfolio, which includes OnTime management software for scrum teams in both on-premises and hosted versions, with the latter starting at $25 per user per month. Oh, and if you’re not clear on scrum development, the company also launched ScrumHub.com, an educational site about the technique. Feature photo courtesy of Flickr user PundaG
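To make the per-user math concrete, here is a quick back-of-the-envelope comparison in Python, using only the list prices quoted above (a rough sketch; real tiers, discounts and team sizes will vary):

    # Rough per-user monthly cost for a 50-person team, using the list
    # prices quoted in this post (simplified; actual pricing tiers vary).
    TEAM_SIZE = 50

    monthly_totals = {
        "Axosoft (promo)": 1 / 12,     # $1 per year for the whole organization
        "FogBugz": 25 * TEAM_SIZE,     # $25 per user per month
        "YouTrack": 1500 / 12,         # $1,500 per year for a 50-person team
        "Pivotal Tracker": 2000 / 12,  # $2,000 per year for a 50-person team
        "BugHerd": 2100 / 12,          # about $2,100 per year for a team of 50
    }

    for name, total in monthly_totals.items():
        print(f"{name}: ${total / TEAM_SIZE:.2f} per user per month")

That works out to roughly $2.50 to $3.50 per user per month for YouTrack, Pivotal Tracker and BugHerd: cheap per head, but still a four-figure annual invoice for the whole team.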

Read More...
posted 6 days ago on gigaom
After seeing Windows Phone 8.1 shown off earlier this month, developers can now get their hands on the software. Engadget saw that Microsoft released the update on Monday for any Windows Phone 8 device. While the software is meant for those who make Windows Phone applications, anyone can technically download and install it by registering as a developer for $19, registering for free using Microsoft’s App Studio software, or following a process to developer-unlock their handset. The new software includes Cortana, Microsoft’s voice-centric assistant, personalization tools and support for Universal Windows Apps, to name a few features.

Read More...
posted 6 days ago on gigaom
Antenna maker Mohu is working on releasing its Channels TV adapter this summer after successfully completing a Kickstarter campaign that not only helped the company raise close to $145,000, but also provided some input on the direction of the product. “The majority of the stretch goal feature ideas were suggestions from backers – the product actually evolved during the duration of the Kickstarter campaign as a direct result of backer requests,” said Mohu spokeswoman Jenni Swamp. Mohu Channels is a TV adapter that combines free over-the-air broadcast programming with online video services like Netflix, YouTube and Hulu Plus. The device, which is based on Android, will offer users a cable-box-like programming guide for live TV as well as the option to access additional video sources through an integrated web browser. Check out a first look at a Channels prototype below:

Some of the stretch goals the company added during its Kickstarter campaign include the ability to play back local content as well as basic time-shifting functionality that will allow viewers to pause and then fast-forward through live programming, which should come in handy to skip ad breaks. The company will also develop an Android app that can be used as a remote control for the device. Channels is competing with a number of other products, including streaming devices like Roku, Chromecast and Amazon’s new Fire TV, as well as over-the-air-centric products like Simple.tv and Tablo — but Mohu is coming at it from a unique angle. The company’s founders originally launched Greenwave Scientific, a defense contractor that developed compact radio antennas for armored vehicles. Greenwave Scientific spinoff Mohu took the same technology and repurposed it to build antennas for the reception of HD TV, building flat antennas that don’t look at all like your grandma’s rabbit-ear antenna. This week, Mohu introduced a new compact model dubbed the Leaf Metro.

Read More...
posted 6 days ago on gigaom
Social TV startup Zeebox is rebranding as Beamly and focusing more on interactions that happen when a show isn’t airing — a departure from the live second-screen activities that much of the social TV industry long focused on. Then again, not much is left of that industry: When Zeebox started in 2011, it entered a crowded market of dozens of social TV startups. Fast-forward three years, and most of them have given up or been acquired amid a wave of consolidation and growing doubts about some of the key ideas of social TV. Zeebox is still standing, in part due to well-filled coffers, thanks to backers like Comcast and BSkyB. Jason Forbes, the company’s U.S. EVP, joked during an interview that one reason for the rebrand was confusion around the Zeebox brand. “People thought we were a German competitor to Xbox,” he said. But while Forbes touted Zeebox’s two million monthly active users as well as its young and female audience, he also had to admit that some of its early ideas simply weren’t working. “There has been a kind of reality check,” he told me. Companion experiences like quiz shows and trivia in particular just didn’t work for highly scripted shows, he said. When people watch Mad Men, they just don’t want to be bothered by an app on their phone or tablet. That’s why Beamly now aggregates and generates more content that can be consumed before or after an episode airs, thanks to three editorial teams in Australia, the U.K. and the U.S. The company also teamed up with online celebrities to populate its show rooms with content. “Every single show is unique,” Forbes said, which is why each and every show required personal attention from the Beamly team. It’s worth noting that Beamly’s few remaining competitors seem to be moving in the same direction: Viggle’s Wetpaint also aims to curate content for TV audiences, and TVTag, which recently absorbed GetGlue, uses a lot of human curation to make TV sharable as well. All of that is costly and potentially hard to scale. Forbes told me that he views Beamly as social TV 2.0, but one has to wonder how soon we are going to see the next wave of consolidation in this space.

Read More...
posted 7 days ago on gigaom
Social media companies are in a bind when presenting on mobile: While Facebook, Twitter and their assorted peers have structured their desktop experience to serve as an all-in-one hub for messaging, media sharing and other forms of communication, that experience looks cluttered and difficult to navigate on a phone. So companies constantly strive for a pared-down, simple experience that keeps users engaged and not confused. This week, TechCrunch reported that in order to streamline the user experience on the Facebook app, the company will phase out the app’s current messaging capabilities. If users are interested in communicating with their friends in real time through Facebook on mobile, they will have to use the company’s separate Messenger app to do so. The changes have apparently started already in Europe and will be rolled out over time elsewhere. Facebook’s decision to remove messaging from its main app is bound to cause initial pain for users who are resistant to change (and reluctant to toggle between two apps to engage with a single platform), but it’s the smartest, safest choice Facebook can make to keep its service thriving on mobile.

Dropping the weight

Facebook’s mobile app as it stands now is a hulking, battery-sucking behemoth. It tracks location, constantly fetches data and keeps a whole host of features running to push new information — and that’s before you factor in background tasks that keep the app going even when not in use. By cutting the messaging aspect of the app, Facebook can make its main app smaller and more usable. This is a big deal for a few reasons (like battery life), but the primary one is that a lighter app takes up less space, ideal for emerging markets where feature phones and smaller smartphones dominate. In order to keep growing, Facebook must capitalize on emerging markets — and a stripped-down app can help make that a reality.

Expanding features

By pushing Messenger into its own app, Facebook now has the space not only to incorporate new features into its main app, but also to add richer features to Messenger. The Messenger app already offers more than Facebook’s main mobile platform, integrating with a user’s contacts to text people who aren’t even on Facebook. Within the app, users have a smoother interface to create group chats, mute conversations and even make a phone call over the app. Meanwhile, Facebook can make its main app a better place to post and interact on the News Feed and the Timeline, rather than just acting as a hub. I’d particularly like to see an optimized Groups section and simpler Timeline browsing. By kicking out one of its main tools, Facebook can focus on optimizing other aspects of its mobile experience, and the app can do better without all that bloat.

Better cross-breeding over time

Of course, the end goal could be tighter integration between Facebook and its $19 billion acquisition, WhatsApp. While Facebook may never entirely absorb WhatsApp — particularly given the latter company’s worldwide user base — the standalone Messenger app is a great testing ground for Facebook to incorporate some of WhatsApp’s DNA. Features like video messaging, contact exchanging and location sharing are great parts of the WhatsApp experience that would also be at home in Facebook’s Messenger, and could take advantage of Facebook’s video and Maps content to enrich it even further.
Furthermore, if Facebook does decide to merge WhatsApp with Messenger in the long run, it’s imperative that the company plucks the strongest aspects of its acquisition early on to appease loyal WhatsApp users. By spinning out Messenger, Facebook is future-proofing its mobile game and setting the stage for better mixing with its other products. What we as users lose in the convenience of a single app, we gain in a smarter, more tailored experience.

Read More...
posted 7 days ago on gigaom
In the span of just a few years, open source has produced businesses that are incredibly attractive to the investment community. In 2012, open source venture investment jumped 80 percent over the prior year, with $553 million invested compared to $307 million in 2011. VCs have flocked to darlings like MongoDB, OpenStack, Cloudera, Puppet Labs and Hortonworks because these companies are solving incredibly difficult challenges in the cloud and big data arena faster than any proprietary software vendor could. So why the big increase in interest now? Open source software has been around for years, in many cases implemented on the fringes by developers who prefer the freedom and flexibility of contributing to the evolution of the platforms with which they choose to work. There were even early glimmers of promise; for example, Linux proved to be a fast, effective server platform for many businesses before it grew to be one of the largest open source communities and the third-largest web client operating system in the world. But today, open source has crossed over from a niche techie outlier to a driving force for businesses. A few major factors have made open source more appealing to the business community and driven an open source renaissance:

Innovation and collaboration: Open source technology empowers developers to contribute to projects based on their interests, attracting top talent from all corners of the globe. These international contributions let open source technologies and platforms develop and debug faster, increasing the pace of innovation far beyond that of proprietary counterparts. Furthermore, developers are now more empowered to make technology purchases within their companies, which means that open source is brought to the decision table on a more regular basis.

Agility and time to market: The traditional roadmap approach has historically forced an organization to move at the lethargic pace decided by a vendor rather than at the speed and on the path uniquely required by that organization. Open source enables technological agility in a variety of ways and, perhaps most importantly, helps organizations eliminate the proprietary vendor’s “roadmap.”

Overcoming the security hurdle: The big question around open source software continues to be: “Is it secure?” Over time, open source software projects have proven more secure than their proprietary counterparts because of the sheer number of developers constantly updating software worldwide. Think of it as a 24/7 security monitoring capability no single company would be able to sustain. Once businesses began to recognize security as a differentiator for open source rather than a challenge, it introduced a radical change in acceptance levels within the corporate world.

The digital revolution and resulting influx of big data: Perhaps the largest driver of the open source renaissance is the single market force that changed the way nearly every company — on a global scale — does business. Digital is disrupting nearly every facet of business, and this, paired with the rise of internet-connected-nearly-everything, has brought about a massive influx of information. Open source has become the only way to cope with the nearly infinite data points, preferences and inputs that come from each technology user. Big data, personalization and unprecedented interconnectedness leave open source as the best-positioned technology to manage and operate within the era of digital revolution.
Disrupting old business models: digital revolution style

As the digital revolution marches on, traditional ways of doing business are getting completely turned upside down. A recent survey from Black Duck and North Bridge Venture Partners showed executives are becoming more willing to work with open source communities to influence projects, which in many cases means taking a leadership role to drive change from the inside out. In fact, 61 percent of respondents said they see this type of open source innovation as leading the technology industry forward. This, in many cases, has already begun happening in the big data world. For example, companies like Jaspersoft have made open business intelligence — the quest to analyze and interpret unlimited inputs — an easy-to-execute reality for businesses. MongoDB (the company behind the popular NoSQL database of the same name) is building the infrastructure necessary to not only scale operations, but also keep pace with constantly changing data needs, regardless of where this data may be housed. And we can’t think about big data without thinking of Apache Hadoop. Named after the stuffed elephant of the creator’s son, the Hadoop framework allows for the distributed processing of large data sets across clusters of computers. Hadoop is designed to scale up from single servers to thousands of machines, providing the computing power to handle data in enormous volumes. There are new Hadoop-based growth companies like Cloudera, Hortonworks and MapR as well. The motivations for a mass-market shift to open source are many, but at its core, every business decision boils down to agility. And the power of open source is limitless when organizations tap into the commitment and collaboration of executive teams and developers around the world. While the challenge of keeping up with fast-paced markets will never go away, open source technology will close the gap — making businesses better, faster and smarter than they’d ever imagined. Tom Erickson is CEO of Acquia. Follow him on Twitter: @tom_eric.

Read More...
posted 7 days ago on gigaom
In my quest to make 2014 the Year of the iPad, a professional photo editing program that interfaces with my Lightroom-based workflow was a big gap. This week Adobe released Lightroom Mobile (free, but subscription required), and I took a look at how it could help my photo workflow. Lightroom Mobile allows you to perform basic editing and photo culling. It can also sync with your Adobe Lightroom 5.4 desktop client. There is, however, a huge gotcha.

Pricing

The biggest thing that annoys me about Lightroom Mobile is the pricing. It requires either a Creative Cloud license or, at minimum, a Photoshop Photography Program license. Those run from $9.99 to $600. That’s a lot. Unlike Office for iPad, the app simply will not work without a subscription. While Office at least gives you the option to read files without an Office 365 subscription, Adobe Lightroom Mobile greets you with a login screen when you launch the app. I also have a standalone Lightroom 5 license, but without a Cloud license I can’t sync my photos to Lightroom Mobile. Given the limited feature set of the mobile app, I think this is a huge miss for Adobe.

What the app can and can’t do

The biggest draw of Lightroom Mobile is that it can handle RAW files in a non-destructive manner. It can also sync with my collections in Lightroom 5.4. It has a small set of presets and cropping tools you can use to adjust photos, but they are pretty standard and about as good as those in most existing photo apps. What I did like is that you can adjust the white balance either via presets or by picking a reference point on the photo. You can also adjust the contrast, brightness, highlights, shadows, whites, blacks, clarity, vibrance and saturation, and you can undo all edits to a photo. What it can’t do is the advanced editing you use Lightroom Desktop for: you cannot create custom presets or adjust curves, sharpening, noise reduction, lens correction and the like. It’s also not a professional-level tool. For starters, your iPad display is not calibrated. In my case, being color blind and converting to black-and-white most of the time, this is not a problem for me. Hopefully, Adobe will add more features soon. Right now, the feature set is just too limited to justify a $10/month subscription.

Syncing with Lightroom 5.4

Setting up syncing with Lightroom 5.4 is pretty easy. You go to the collection you want to share and check off a box next to the name. From there, Lightroom syncs down a Smart Preview of each photo. Smart Preview files are a lightweight, smaller file format based on the lossy DNG format introduced in Lightroom 4. They also let you edit files that are not directly attached to your Mac; I use them to edit photos on the go when I’m not attached to my main drive at home. On the iPad, this helps keep file sizes at a manageable level. You can also create collections in Lightroom Mobile and sync those back to the desktop version. You can import photos from your iPad’s camera roll, but not your Photo Stream. It’s also important to note that your photos are not synced through Adobe’s cloud services, so you can’t bring your iPad to a shoot, create a collection and have the photos already on your desktop when you get back to your desk.

How it will integrate with my workflow

My photo workflow is pretty basic. I import my photos from my camera’s SD card to Lightroom. I then go through the photos and pick or reject them. From there I process the keepers via a collection of custom presets.
Lightroom Mobile can certainly help with the culling process. I find using the iPad to go through photos a very relaxing part of the process. You can import your photos during a shoot and then view them with the model to see which ones he or she likes. This saves a ton of time and helps eliminate the need to book other sessions for a reshoot. Other than that, I don’t see myself doing any heavy photo editing on my iPad. I might see how a photo will look in B&W, but all my post-processing will still be done in Lightroom 5.4.

Is it worth the subscription?

If you do not already have a Photoshop Photography Program subscription, I see little reason to subscribe just to get Lightroom Mobile. Unlike Office 365, where all apps can access files stored on your OneDrive, Lightroom Mobile does not access your Creative Cloud storage. If it did, and I had the ability to sync down a collection at will, that might make the subscription palatable. As it is now, the app should just be free, since it’s more of a companion app to Lightroom 5.4.

Read More...
posted 7 days ago on gigaom
The OpenStack Icehouse release due this week promises more business-friendly enhancements, including at least some support for rolling upgrades from the previous (Havana) release. As Red Hat product manager Steve Gordon wrote in a blog post last month: “The Compute services now allow for a level of rolling upgrade, whereby control services can be upgraded to Icehouse while they continue to interact with compute services running code from the Havana release. This allows for a more gradual approach to upgrading an OpenStack cloud, or logical designated subset thereof, than has typically been possible in the past.” If this works as promised (although it’s unclear exactly what “a level of rolling upgrade” means), it could be a big advance for OpenStack, which has been dinged for the difficulty of upgrades; in the past they required a complete system shutdown, something no IT person wants to even consider. Gigaom Research analyst Paul Miller has more on architecting OpenStack for the enterprise. Expect more OpenStack news to emerge in the run-up to the OpenStack Summit in Atlanta next month, but rivals aren’t standing still: Apache CloudStack recently announced its 4.3 release with Hyper-V support, and Eucalyptus continues to push its Amazon-compatible private cloud infrastructure. To hear more about how the private cloud market is shaping up, check out Structure in San Francisco, where Chris Kemp, founder and chief strategy officer of Nebula; Marten Mickos, CEO of Eucalyptus; and Sameer Dholakia, group VP and GM of Citrix’s Cloud Platforms Group, will reunite on stage to discuss private cloud choices. To get a taste of what’s in store from their panel, check out their appearance at Structure 2012. You won’t be sorry.

Structure Show examines private cloud

For more on how the traction of various private and public clouds is shaping up, check out this week’s Structure Show, in which RightScale VP Kim Weins takes us through the company’s latest State of the Cloud Report.

Read More...
posted 7 days ago on gigaom
About a year ago, I got fed up with my home Wi-Fi. No matter which router I bought, I simply couldn’t get reasonably good signal strength or consistently fast wireless speeds in certain rooms. Going back and wiring my home for data wasn’t an option, so I dropped $69 on the REC10 wireless range extender from Amped Wireless. It’s probably the best money I spent last year because it solved my wireless woes. Now the company has a newer model called the REC15A, and I’ve been using it for the past several weeks. The new range extender costs $99, and I’ve found it’s worth the premium if you have a newer router like I do. It provides even faster wireless speeds, often coming close to the full home broadband speeds I can get with a wired connection. In fact, in some locations I can get more than a 75 Mbps connection over Wi-Fi, the same as if I were connected directly to my home router with an Ethernet cable. Aside from the price, what’s different between the REC10 and REC15A? Three main things.

1. The older model supports 802.11n at up to 300 Mbps, according to Amped Wireless. That means it should work well if you have an 802.11n router purchased in the last several years. The REC15A, however, works with faster 802.11ac routers, and I bought one of those, an Asus model, in 2011. And more mobile devices now support the faster Wi-Fi: 802.11ac is supported in my Moto X, for example, as well as the latest flagship phones.

2. My router is dual-band, meaning it can broadcast using both the 2.4 and 5 GHz frequency bands. The REC10 extender only works with the former frequency, while the newer REC15A uses both simultaneously. That lets me run multiple networks across different channels; helpful because I dedicate one band solely to video content. Doing so keeps all of the other “chatty” devices and apps from affecting video content on the network.

3. Both extenders boost the signal and range of my home network, but in this case the older model does a slightly better job. The REC10 provides a 600 mW boost while the new REC15A outputs 500 mW. As a result, the range of the newer model is a little less by comparison. The difference is subtle, but I can see it from time to time when checking actual signal strength in my home. I found, however, that it really hasn’t affected the speeds; I still routinely get better speeds with the REC15A because of the dual bands and faster 802.11ac wireless technology.

The HTC One M8 supports 802.11ac, making for fast Wi-Fi speeds all across my home with the REC15A installed. If you’re not getting the full speeds of your home broadband over Wi-Fi, I can definitely recommend both of the Amped Wireless range extenders. Which one you should consider depends on your current router and how much range you’re looking for. With an older router, I’d suggest the REC10; or upgrade to an 802.11ac router and splurge on the REC15A. Already have an 802.11ac router? The answer is a no-brainer: the REC15A will be the better unit overall. Both are simple to set up: Just plug them into an outlet and configure the unit over a web connection. In under five minutes you’ll be able to experience fast in-home Wi-Fi in nooks and crannies you never could before. Now that my review unit is heading back, I’ll be ordering one of my own.

Read More...
posted 7 days ago on gigaom
Remotely accessing a computer isn’t new, and there are plenty of options for doing so. One of the newest comes from Google, however: The company has been working on an Android version of its Chrome Remote Desktop app for nearly a year, and a full release is likely imminent. A select few beta testers are using the software, which provides remote control of a Windows or Mac computer from an Android phone or tablet. We noted on this week’s Chrome Show podcast that the software will likely provide a better experience on a tablet, owing to its larger display; it’s not ideal to show a full computer screen on a small phone. The app appears to work similarly to Google’s Chrome Remote Desktop extension, which works with any computer that has the Chrome browser installed. Tune in below or download the full podcast episode here to hear our thoughts about Chrome Remote Desktop as well as news of the coming-soon Asus C200 Chromebook and the potential for Google’s Chromecast to become a daily dashboard for your television.

Read More...
posted 7 days ago on gigaom
New Gigaom Research reports this week include Ben Kepes’ evaluation of macro technology trends and Aram Sinnreich’s continued research on 3D printing. Analyst Paul Miller also explores OpenStack deployment in unison with VMware virtualization. Note: Gigaom Research is a subscription-based research service offering in-depth, timely analysis of developing trends and technologies. Visit research.gigaom.com to learn more about it.

Buyers Lens: How to utilize cloud computing, big data, and crowdsourcing for an agile enterprise by Ben Kepes
This week, analyst Ben Kepes takes a 10,000-foot view of organizations today and the trends that threaten traditional business. Rather than viewing change as a threat, companies that embrace technology shifts will find opportunities to remain competitive, increase efficiency, and generate new business. In this report, Kepes highlights cloud computing, big data, and crowdsourcing as three key technologies every organization must consider, regardless of industry.

Connected Consumer: Legal challenges and opportunities for 3D printing by Aram Sinnreich
In this report, Aram Sinnreich reviews the undecided legal issues that have the greatest potential effect on creators, manufacturers, and other stakeholders involved in the 3D printing marketplace. Two of these major gray areas — patents and copyrights — have not been adequately addressed, leaving billions in revenue in question. For further analysis on 3D printing, be sure to check out his recommendations for companies impacted by additive manufacturing.

Cloud: Architecting OpenStack for enterprise reality by Paul Miller
The open-source cloud infrastructure project OpenStack has been top of mind for enterprise IT managers. Instead of throwing away existing investment in virtualization, this report proposes a hybrid approach and demonstrates integration between OpenStack-powered clouds and VMware virtualization. Analyst Paul Miller introduces OpenStack and then explores the benefits of implementing OpenStack alongside on-premise solutions. Featured image from Shutterstock/alphaspirit.

Read More...
posted 8 days ago on gigaom
Compute and storage are essentially commodity services, which means that for cloud providers to compete, they have to show real differentiation. This is often achieved with supporting services like Amazon’s DynamoDB and Route 53, or Google’s BigQuery and Prediction API, which complement the core infrastructure offerings. Performance is also often singled out as a differentiator. One of the things that often bites production usage, especially in inherently shared cloud environments, is the so-called “noisy neighbor” problem. This can be other guests stealing CPU time, increased network traffic and, particularly problematic for databases, I/O wait. In this post I’m going to focus on networking performance. This is very important for any serious application because it affects the ability to communicate and replicate data across instances, zones and regions. Responsive applications and disaster recovery, areas where up-to-date database replication is critical, require good, consistent performance. It’s been suggested that Google has a massive advantage when it comes to networking, due to all the dark fibre it has purchased. Amazon has some enhanced networking options that take advantage of special instance types with OS customizations, and Rackspace’s new Performance instance types also boast up to 10 Gbps networking. So let’s test this.

Methodology

I spun up the listed instances to test the networking performance between them. This was done using the iperf tool on Linux. One server acts as the client and the other as the server:

    Server: iperf -f m -s
    Client: iperf -f m -c hostname

The OS was Ubuntu 12.04 (with all the latest updates and kernel), except on Google Compute Engine, where it’s not available; there, I used the Debian Backports image. The client was run three times for each test type – within zone, between zones and between regions – with the mean average taken as the value reported.

Amazon networking performance

                                t1.micro (1 CPU)   c3.8xlarge (32 CPUs)
    us-east-1a -> us-east-1a    135 Mbits/sec      7013 Mbits/sec
    us-east-1a -> us-east-1d    101 Mbits/sec      3395 Mbits/sec
    us-east-1a -> us-west-1a    19 Mbits/sec       210 Mbits/sec

Amazon’s larger instances, such as the c3.8xlarge tested here, support enhanced 10 Gbps networking; however, you must use the Amazon Linux AMI (or manually install the drivers) within a VPC. Because of the additional complexity of setting up a VPC, which isn’t necessary on any other provider, I didn’t test this, although it is now the default for new accounts. Even without that enhancement, the performance is very good, nearing the advertised 10 Gbits/sec. However, the consistency of the performance wasn’t so good: the speeds changed quite dramatically across the three test runs for all instance types, much more than with any other provider. You can use internal IPs within the same zone (free of charge) and across zones (incurs inter-zone transfer fees), but across regions, you have to go over the public internet using the public IPs, which incurs further networking charges.
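For anyone who wants to reproduce these numbers, here is a minimal sketch (not from the original post) of how the three client runs per pair can be automated in Python. It assumes iperf is installed and an "iperf -f m -s" server is already listening on the target host; the host IP is hypothetical:

    # Run the iperf client several times against a host and average the
    # reported throughput, mirroring the methodology described above.
    import re
    import subprocess

    def mean_throughput_mbits(host, runs=3):
        results = []
        for _ in range(runs):
            out = subprocess.check_output(["iperf", "-f", "m", "-c", host], text=True)
            # With -f m, iperf reports a line ending in e.g. "943 Mbits/sec"
            match = re.search(r"([\d.]+)\s+Mbits/sec", out)
            if match:
                results.append(float(match.group(1)))
        return sum(results) / len(results)

    print(mean_throughput_mbits("10.0.0.2"))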
Google Compute Engine networking performance

                                       f1-micro (shared CPU)   n1-highmem-8 (8 CPUs)
    us-central-1a -> us-central-1a     692 Mbits/sec           2976 Mbits/sec
    us-central-1b -> us-central-1b     905 Mbits/sec           3042 Mbits/sec
    us-central-1a -> us-central-1b     531 Mbits/sec           2678 Mbits/sec
    us-central-1a -> europe-west-1a    140 Mbits/sec           154 Mbits/sec
    us-central-1b -> europe-west-1a    137 Mbits/sec           189 Mbits/sec

Google doesn’t currently offer an Ubuntu image, so instead I used its backports-debian-7-wheezy-v20140318 image. For the f1-micro instance, I got very inconsistent iperf results for all zone tests. For example, within the same us-central-1a zone, the first run showed 991 Mbits/sec, but the next two showed 855 Mbits/sec and 232 Mbits/sec. Across regions between the US and Europe, the results were much more consistent, as were all the tests for the higher-spec n1-highmem-8 server. This suggests the variability was because of the very low-spec, shared-CPU f1-micro instance type. I tested more zones here than on other providers because on April 2, Google announced a new networking infrastructure in us-central-1b and europe-west-1a that would later roll out to other zones. There was about a 1.3x improvement in throughput using this new networking, and users should also see lower latency and CPU overhead, which are not tested here. Although 16-CPU instances are available, they’re only offered in limited preview with no SLA, so I tested on the fastest generally available instance type. Since networking is often CPU-bound, there may be better performance available when Google releases its other instance types. Google allows you to use internal IPs globally — within zone, across zones and across regions (i.e., using internal, private transit instead of going across the internet). This makes it much easier to deploy across zones and regions, and indeed Google’s Cloud Platform was the easiest and quickest to work with in terms of the control panel, speed of spinning up new instances and being able to log in and run the tests in the fastest time.

Rackspace networking performance

                                            512 MB Standard (1 CPU)   120 GB Performance 2 (32 CPUs)
    Dallas (DFW) -> Dallas (DFW)            595 Mbits/sec             5539 Mbits/sec
    Dallas (DFW) -> North Virginia (IAD)    30 Mbits/sec              534 Mbits/sec
    Dallas (DFW) -> London (LON)            13 Mbits/sec              88 Mbits/sec

Rackspace does not offer the same kind of zone/region deployments as Amazon or Google, so I wasn’t able to run any between-zone tests. Instead I picked the next closest data center. Rackspace offers an optional enhanced virtualization platform called PVHVM. This offers better I/O and networking performance and is available on all instance types, which is what I used for these tests. Similar to Amazon, you can use internal IPs within the same location at no extra cost, but across regions you need to use the public IPs, which incur data charges. When trying to launch two 120 GB Performance 2 servers at Rackspace, I hit our account quota (with no other servers on the account) and had to open a support ticket to request a quota increase, which took about an hour and a half to approve. For some reason, launching servers in the London region also requires a separate account, and logging in and out of multiple control panels soon became annoying.

Softlayer networking performance

                             1 CPU, 1 GB RAM, 100 Mbps   8 CPUs, 2 GB RAM, 1 Gbps
    Dallas 1 -> Dallas 1     105 Mbits/sec               911 Mbits/sec
    Dallas 1 -> Dallas 5     105 Mbits/sec               921 Mbits/sec
    Dallas 1 -> Amsterdam    29 Mbits/sec                61 Mbits/sec

Softlayer only allows you to deploy into multiple data centers at one location: Dallas.
All other regions have a single facility. Softlayer also caps out at 1 Gbps on its public cloud instances, although its bare metal servers do have the option of dual 1 Gbps bonded network cards, allowing up to 2 Gbps. You choose the port speed when ordering or when upgrading an existing server. Softlayer also lists 10 Gbit/s networking as available for some bare metal servers. Similarly to Google, Softlayer’s maximum instance size is 16 cores, but it also offers private CPU options, which give you dedicated cores rather than cores shared with other users. This allows up to eight private cores, for a higher price. The biggest advantage Softlayer has over every other provider is completely free private networking between all regions, whereas every other provider charges for transfer out of zone. When you have VLAN spanning enabled, you can use the private network across regions, which gives you an entirely private network for your whole account. This makes it very easy to deploy redundant servers across regions, and it is something we use extensively for replicating MongoDB at Server Density, moving approximately 500 Mbits/sec of internal traffic across the US between Softlayer’s Washington and San Jose data centers. Not having to worry about charges is a luxury only available with Softlayer.

Who is fastest?

                       Fastest (low spec)   Fastest (high spec)   Slowest (low spec)   Slowest (high spec)
    Within zones       Google               Amazon                Softlayer            Softlayer
    Between zones      Google               Amazon                Rackspace            Softlayer
    Between regions    Google               Amazon                Rackspace            Softlayer

Amazon’s high-spec c3.8xlarge gives the best performance across all tests, particularly within the same zone and region. It was able to push close to the advertised 10 Gbps throughput, but the high variability of results may indicate some inconsistency in real-world performance. Yet for very low cost, Google’s low-spec f1-micro instance type offers excellent networking performance: ten times faster than the terrible performance from the low-spec Rackspace server. Softlayer and Rackspace were generally bad performers overall, but at least Rackspace gets some good inter-zone and inter-region performance and performed well for its higher instance spec. Softlayer is the loser overall here, with low performance plus no network-optimized instance types; only its bare metal servers have the ability to upgrade to 10 Gbits/sec network interfaces.

Mbits/s per CPU?

CPU allocation is also important. Rackspace and Amazon both offer 32-core instances, and we see good performance on those higher-spec VMs as a result. Amazon was fastest for its highest-spec machine type, with Rackspace coming second. The different providers have different instance types, so it’s difficult to do a direct comparison on the raw throughput figures. An alternative ranking method is to calculate how much throughput you get per CPU. We’ll use the high-spec, same-zone figures and do a simple division of the throughput by the number of CPUs:

    Provider     Throughput per CPU
    Google       380 Mbits/s
    Amazon       219 Mbits/s
    Rackspace    173 Mbits/s
    Softlayer    113 Mbits/s
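As a quick sanity check, the per-CPU figures can be reproduced from the same-zone, high-spec results in the tables above (a minimal sketch; the division is truncated, matching the table):

    # Throughput per CPU from the same-zone, high-spec results above.
    figures = {
        "Google": (3042, 8),      # n1-highmem-8, us-central-1b
        "Amazon": (7013, 32),     # c3.8xlarge, us-east-1a
        "Rackspace": (5539, 32),  # 120 GB Performance 2, Dallas
        "Softlayer": (911, 8),    # 8 CPU / 1 Gbps instance, Dallas
    }
    for provider, (mbits, cpus) in figures.items():
        print(f"{provider}: {mbits // cpus} Mbits/s per CPU")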
The best might not be the best value

If you have no price concerns, then Amazon is clearly the fastest, but it’s not necessarily the best value for money. Google gets better Mbits/sec-per-CPU performance, and since you pay for more CPUs, it’s a better value. Google also offers the best performance on its lowest-spec instance type, but it is quite variable due to the shared CPU. Rackspace was particularly poor when it came to inter-region transfer, and Softlayer isn’t helped by its lack of any kind of network-optimized instance types. Throughput isn’t the end of the story, though. I didn’t look at latency or CPU overhead, and these will have an impact on real-world performance. It’s no good getting great throughput if it requires 100 percent of your CPU time! Google and Softlayer both have an advantage when it comes to operational simplicity because their VLAN-spanning-like features mean you have a single private network across zones and regions; you can utilize their private networking anywhere. Finally, pricing is important, and an oft-forgotten cost is network transfer fees. Transfer is free within zones for all providers, but only Softlayer has no fees for data transfer across zones and even across regions. This is a big saver. David Mytton is the founder and CEO of Server Density, a cloud management and server monitoring specialist. He can be contacted at david@serverdensity.com or followed on Twitter @davidmytton. Featured image: Shutterstock/ssguy

Read More...
posted 8 days ago on gigaom
The Netflix-Comcast truce has demonstrated once more how crucial video has become for today’s internet. YouTube alone streams enough footage each month to theoretically entertain every single human alive for four hours. Facebook users spend an average of 84 minutes a month watching clips on the social network, which topped five billion views in January. The data inside each clip, and the metadata about each viewer’s interaction with a video, can make or break marketing campaigns. But are companies making use of the vast treasure trove of data that all those streamed videos give them? So far, the answer is no. Using big data to boost one’s sales and marketing activities may sound like old news, but most companies today don’t use the full suite of modern business intelligence (BI) tools at their disposal. Some have embarked on implementing the open source Hadoop framework for data warehousing, including newer additions such as Impala that make up for the lack of speed of the initial Hadoop versions. Some companies are trying new approaches to turn the entire web into a data repository, connecting sources across the cloud to each other and to their various on-premise datasets to run complex queries in a browser. And some are betting on new appliances that supposedly make mining your data as easy as a search query. But most companies still struggle to make sense of the basic requirements for all the different big data technologies out there — from budget to necessary staff skills. They also need internal buy-in to connect entirely new data sources to their sales, marketing and other activities to get the 360-degree view of their value chain and operations that software vendors have been promising. That’s a pity, because video data is a particularly valuable asset to mine. The foray into the rich data sets of social media and video lets companies large and small literally see more and sell more.

Photo from Thinkstock/Oleksiy Mark

Take one European enterprise my firm works with. This company noticed that its sales of one product had shot up and almost drained inventory in a few days. But why? When the sales team talked to the social media guys, they found out that a video about the new product had been viewed more than 100,000 times the day before the spike occurred. The firm used a team of two to pull together data from across the web and inside its firewall: online orders and conversion rates, data from its YouTube and Vimeo accounts, plus Google Analytics and Facebook Insights. It turns out the bestseller story was a bit more complicated. YouTube views had indeed shot up, but they had only led to a 15 percent increase in orders. What really drove the unusually high sales was something else: the moment when die-hard fans started spreading the word. They shared the clip everywhere from Tumblr to Facebook and got their friends to watch it on mobile devices. Viral plus handheld generated a 40 percent sales increase, but that tidbit only showed up once all the dots were connected.
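Connecting those dots is mostly mechanical once the data is exported. Here is a minimal, hypothetical sketch in Python with pandas; the file and column names are invented for illustration, not taken from the company’s actual setup:

    # Join daily per-channel video views with daily online orders, then
    # correlate each channel's views with next-day orders to spot drivers.
    import pandas as pd

    views = pd.read_csv("video_views.csv", parse_dates=["date"])     # date, channel, views
    orders = pd.read_csv("online_orders.csv", parse_dates=["date"])  # date, orders

    daily = (
        views.pivot_table(index="date", columns="channel", values="views", aggfunc="sum")
        .join(orders.set_index("date")["orders"])
        .fillna(0)
    )

    print(daily.drop(columns="orders").corrwith(daily["orders"].shift(-1)))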
Mining video data is the next big thing in harnessing big data. It is simply too big a data pool to ignore. YouTube alone has more than a billion unique viewers each month, 80 percent of them from outside the U.S. The number of subscriptions has tripled since last year, and 40 percent of all content is viewed on mobile devices. This is why the POV should meet the POS: only when you mash up all these pieces of information, and do so as quickly as possible, do you stand a chance of establishing cause and effect. It might not sound as sexy as “big data,” but mining video clips brings enterprises one step closer to understanding marketing success — and how to repeat it. Even better, there are tools out there that do not require nerds.

It would be wrong to declare that one data source is suddenly more important than all the others, but companies need to put the spotlight on video and marry those insights with bone-dry sales and marketing numbers.

Rachel Delacour is CEO and co-founder of cloud business analytics pioneer BIME Analytics and also holds an advisory role on cloud computing standards with EuroCloud. Follow her on Twitter @bimeanalytics. Featured image from Shutterstock/photosani

Read More...
posted 8 days ago on gigaom
My seven years on the Internet Engineering Task Force (IETF), from 2003 to 2010, definitely taught me interesting things, including how to get a group of people to deliver when you have no control over their jobs. As co-chair of the Network-based Mobility Management (NETLMM) working group, I led one of the more contentious working groups at the IETF. We managed to deliver relevant standards and actually brought closure to the working group so we could move on. Overall, my experience with the IETF has positively contributed to my skills in leadership, consensus building, design thoroughness and seeing the big picture. It also gave me the opportunity to interact with incredibly talented people from diverse organizations and to really understand how the Internet came to be what it is today.

And yet, several years ago, when I was nominated for the Internet Architecture Board, I decided it was not for me. Not long after, I took an indefinite leave of absence from the IETF and have not returned since. There are times I feel guilty about not giving as much to the Internet anymore, and I take great pride, and consider it my good fortune, to have served on committees like the Security Directorate, reviewing contributions to ensure that they don’t break the security of the Internet. However, I find myself less distraught as I try to serve the Internet through other practical contributions outside the fences of the standards organizations. (I’ve also had my share of experiences at other standards organizations, like the IEEE, 3GPP and 3GPP2.)

So, why did I actually stop contributing to standards definitions? The primary reason is that while the pace at which standards are written hasn’t changed in many years, the pace at which the real world adopts software has become orders of magnitude faster. Standards, unfortunately, have become the playground for hashing out conflicts and carrying out siloed agendas, and as a result they have suffered a drastic degradation.

Consider the Internet of Everything (IoE), one of the hottest topics of today. The Internet of Everything, you say? Surely, this must be built on interoperable standards! How can you possibly be talking to everything, from lightbulbs to toothbrushes to phones, without interoperability? That sounds absurd! And you would be right; there is a need for interoperability. But what is the minimum we need? Is it IP? Is it some link layer defined by IEEE, such as 802.15.4? Or Bluetooth 4.0? HTTP, perhaps? It is useful to remember that none of these is by itself sufficient to make the IoE work in a way that is meaningful to the user or the end consumer. And yet we act as though, once the inevitable PHY (physical) and MAC (link layer) protocols are defined by IEEE, we are ready to roll.

Running code and rough consensus, the motto of the IETF, used to be realizable at some point. Nowadays, it is as though Margaret Thatcher’s words, “consensus is the lack of leadership,” have come to life. In the name of consensus, we debate frivolous details forever. In the name of patents, we never finish. One recent case in point is the long and painful codec battle in the WebRTC working group. I have tremendous respect for a good number of the people who participate at the IETF and other standards organizations and who continue to make the Internet functional and sound.
I value interoperability and hope that we will get it together for the sake of the IoE, because that vision is going to be hard to realize without good interoperability. But I look across the board at the IEEE, the IETF, the SDN organizations and the like, and feel that these organizations need a radical restructuring effort. They need to be shaken up, turned on their heads and revamped in order to have monumental impact on the Internet once again.

For one, we all need to agree that everyone gains from the Internet being unencumbered and that interoperability only helps the Internet serve all our needs better. More critically, I believe it is time to revisit the tradeoffs between consensus and leadership; they absolutely should not be considered one and the same. This will be tricky and will require careful balancing of undesirable control against faster decisions. Most likely, a change like this will require a complete overhaul of the leadership selection process and structure. But without this rather drastic shake-up, I’m afraid we are widening the gap between standards and reality. The world is marching toward fragmented islands of communication connected via fragile pathways. It is inevitable, as this is the fastest path to market. Unless these standards organizations make radical shifts toward practicality, their relevance will soon be questionable. For now, some of the passionate people will go off and try to make real things happen elsewhere.

I feel like a loser for saying “I quit writing standards”; kudos to the people who are sticking with it to make the Internet a better place. Some day, hopefully, we will all be better off because of it!

Vidya Narayanan is an engineer at Google. With a long history in mobile, she is obsessed with enabling amazing mobile experiences. She blogs at techbits.me and on Quora. Follow her on Twitter @hellovidya. Featured image courtesy of Shutterstock user almagami

Read More...
posted 8 days ago on gigaom
There is an almost-secret battle going on behind the scenes of the mobile platform wars: the battle for mobile browser market share. What makes the fight to become the dominant browser on mobile different from the one on the desktop is that the battle lines are drawn predominantly along device and platform boundaries. When you take a closer look at the race to become the top browser on the iOS platform, you will find that it is features, rather than speed, that users are choosing.

Mobile browser market share

While comScore data may show that more people are using their mobile devices than their personal computers, this does not yet apply to browsing the web. Looking at data collected from StatCounter, 24.9 percent of all web traffic came from mobile devices in April 2014, up from 13.9 percent in April 2013. So while mobile browsing will likely overtake desktop browsing sometime in the future, it has not happened quite yet. Any time markets grow this fast, there will inevitably be competition and a race to the top. When it comes to mobile browser market share, the dynamics are reminiscent of the desktop browser wars of the past. Looking at the top nine mobile browsers over the last 12 months, you can see that Chrome is fast becoming the dominant browser across all of mobile, climbing from 2.29 percent in April 2013 to 13.59 percent in April 2014 and overtaking Opera for the No. 3 position, according to StatCounter.

Benchmarking results on iOS

When choosing which browser to use on iOS, the following data shows that it cannot be performance that is the driving factor. This is interesting, as browser speed continues to be one of the major factors influencing which desktop browser people use. For the benchmarking tests, the iPad version of each browser was used on a 128GB iPad Air running the latest iOS 7.1 update. Three different test suites were used to test the performance of the nine web browsers: SunSpider v1.0.2, Octane 2.0 and V8 Benchmark Suite v7.

Looking at the results, you can see what Jay Sullivan, Mozilla’s vice president of product, was referring to back in March of last year. You may recall that Mozilla pulled its Firefox Home app from the App Store and halted all development of an iOS browser because Apple restricts third-party browser developers to the UIWebView rather than letting them use their own rendering and JavaScript engines. As a result, almost every third-party browser tested lags behind Apple’s own Safari mobile browser where performance is concerned. The results show that the browsers, including Google’s own Chrome, perform at nearly identical levels. That is, until you look at the results from the Puffin mobile browser for iOS: Puffin outperformed Safari in all three tests. Another notable exception was that Opera was unable to complete any of the benchmark tests. Seeing as Opera for iOS has not been updated since October 2012, it is no wonder that it could not execute any of the latest tests.
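One wrinkle in reading numbers from these suites: SunSpider reports a runtime in milliseconds, where lower is better, while Octane and the V8 suite report scores, where higher is better. Here is a minimal sketch of putting all three on a common Safari-relative scale; the figures below are invented placeholders, not the measured results:

    # Normalize three suites to a Safari-equals-1.0 baseline so they can be
    # compared side by side. All numbers below are made up for illustration.
    RESULTS = {
        "Safari": {"sunspider_ms": 400, "octane": 5500, "v8": 3400},
        "Chrome": {"sunspider_ms": 410, "octane": 5400, "v8": 3350},
        "Puffin": {"sunspider_ms": 250, "octane": 9000, "v8": 6200},
    }
    LOWER_IS_BETTER = {"sunspider_ms"}  # SunSpider measures time, not a score

    def relative_to_safari(browser: str, suite: str) -> float:
        base, value = RESULTS["Safari"][suite], RESULTS[browser][suite]
        # Invert time-based suites so >1.0 always means faster than Safari.
        return base / value if suite in LOWER_IS_BETTER else value / base

    for browser in RESULTS:
        ratios = {s: round(relative_to_safari(browser, s), 2) for s in RESULTS[browser]}
        print(browser, ratios)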
Unique features drive choice

Puffin Web Browser ($3.99, Universal) has been able to achieve its wicked-fast performance on iOS because it is not really running on iOS: Puffin uses cloud computing to render web pages. Not only does the cloud behind Puffin make it a fast browser, it also allows Puffin to support Adobe Flash Player 11.9. To help users with Flash sites that were originally built for the mouse, Puffin offers a virtual gamepad for playing online games built with Flash, as well as a virtual trackpad that simulates all the mouse operations of a personal computer. Puffin also lets you change your user agent setting, which makes it a good browser choice when you are trying to replace your personal computer with your iPad. While it can sync your browser tabs with Chrome using your Google account, it does not sync your bookmarks or history.

Google Chrome for iOS (Free, Universal) definitely has its appeal to users of the desktop version of Chrome, and there are a lot of them: Chrome is the dominant desktop browser, with a commanding 46.49 percent share on StatCounter. Being able to sync your history, bookmarks and tabs across all of your devices and your desktop can certainly be more important than having the fastest browser. Google has done a great job of integrating its online services into the apps it builds for iOS, and many third-party apps now support “Open in Chrome” as one of their sharing options.

Safari Mobile (Free, Universal) can sync your bookmarks, reading list, open tabs and history with all of your other devices, including Safari on OS X. What you may not know is that you can sync your bookmarks with Internet Explorer, Firefox or Chrome on Windows using the iCloud Control Panel 3.1 for Windows. To do so you need to create an iCloud account, but you do not have to use iCloud’s email services; in fact, you can use any email address when creating the iCloud account that you want to sync your bookmarks with. That way you can use Safari’s fast browser on your iOS device and any browser on your Windows desktop.

iCab Mobile Web Browser ($1.99, Universal) has one unique feature that may appeal to anyone who shares an iOS device with others: it supports multiple users on the same device. With iCab you can add accounts that maintain their own preferences, profiles and browsing history. Like Chrome, iCab has also done a great job of partnering with third-party developers that support iCab as your device browser of choice. It also has enhanced support for filling out forms online, as well as for uploading and downloading content from the web.

Dolphin Browser (Free, iPhone; Free, iPad) has extensions for Safari, Chrome and Firefox that let you sync history, bookmarks, passwords and open tabs between your devices and your desktop through a service it calls Dolphin Connect. It has its own integrated voice search, Sonar, which you activate by shaking your device. You can also use gestures to launch your favorite URLs, and if there happens to be another Dolphin user nearby, you can quickly share a link with them using the Wi-Fi broadcast feature. When it comes to creating a rich set of unique and innovative browsing experiences, Dolphin has really outdone itself.

Read More...
posted 8 days ago on gigaom
RightScale has been managing public cloud resources for companies since TK, and thousands of users later, it appears to have a pretty good sense of what’s happening in the cloud space. Recently, the company released its State of the Cloud report, including a survey of more than 1,000 companies about the clouds they’re using and plan to use. Kim Weins, RightScale’s vice president of marketing, came on the Structure Show this week to talk about the results. Here are the highlights of that interview, but you’ll want to hear the whole thing for all of Weins’s thoughts on which cloud platforms are most popular (as well as for my and Barb Darrow’s takes on the week in cloud and big data news). And if you’re really into learning about the future of cloud computing — the business models and the architectures — make plans to attend our Structure conference June 18 and 19 in San Francisco. It features a who’s who of cloud executives, architects and users from companies such as Google, Amazon Web Services, Airbnb and more.

VMware: Lots of products, lots of lock-in and lots of interest

Of course, most respondents to the RightScale survey were using Amazon Web Services. However, Weins explained: “If you look within the enterprise segment … we saw that the vCloud Hybrid Service from VMware came in No. 2. That surprised us a little bit and we were a little bit suspicious of that for two reasons. One is, it’s a pretty new service … And the second is that people often get very confused about the different VMware products and which ones they’re using. We call it ‘vSoup.’ They’re not sure. You put ‘v’ in front of something, and if they’re using anything VMware they say ‘yes.’”

When RightScale did some follow-up calls to determine whether respondents actually were using vCloud Hybrid Service, it found that more than half were experimenting with it, some others thought they were using it, and some others were just confused about which VMware products they were actually using. And as RightScale moves more into managing private cloud environments as well, Weins said it’s seeing a lot of interest from customers that want to turn their vSphere servers into a cloud. So RightScale has developed a lightweight appliance that helps “cloudify” vSphere so it can be managed as part of the RightScale service. However, she added, as much interest as there is from VMware shops that want to bring that trusted environment with them into the cloud, there’s also a concern: “What we’re actually seeing more of there is people who are concerned as they move to cloud, they don’t want to be in the all-VMware, all-the-time-forever camp. They want to preserve their options. They want to know that they’re not locked into always using everything VMware, whether it’s the hypervisor or other services, because they know that that’s a costly option. So I think that people are being very cautious about how they dip their toes in the water there.”

OpenStack: Yes, it might matter

“Definitely a lot of interest. Definitely a lot of interest,” Weins said of OpenStack. “In the private cloud world, they’re No. 2 really in adoption so far in terms of people running applications, but they have the most in terms of people experimenting or planning to use it.” No. 1? VMware. If RightScale’s data is indicative of the IT world at large, that puts a lot of pressure on the OpenStack community to get its act together.
“I think the one question mark is will people overcome the learning curve associated with OpenStack and the complexities of implementing it next year,” Weins said. “… I think the jury is still out whether a lot of those experimenters are going to take the leap in the next year or two, or if it’s going to take longer.”

What of Google, Microsoft and the telcos?

In the RightScale survey, Microsoft Azure and the Google Cloud Platform — both the platform-as-a-service and infrastructure-as-a-service options — had more people interested in using them than actually using them. But that interest level is very high for both. “It was very interesting to see the interest in the PaaS options, both from Google as well as from Azure,” Weins said. “… Now the difference between those two players is that within the larger companies Azure was stronger in terms of mindshare, and within the small and medium-sized companies, Google was stronger in terms of the mindshare.”

As for telcos like Verizon, AT&T and CenturyLink, which have invested heavily in their cloud services and are often suggested as natural fits to dominate enterprise cloud workloads, well … Weins said RightScale occasionally comes across telco users interested in using its management service. In the survey, “a handful of people” mentioned those providers in the “Other” category.

Read More...
posted 8 days ago on gigaom
When you think of Android, you probably don’t think of Amazon. You should. Amazon has slowly built a product line on the open Android software, starting first with Kindle Fire tablets and more recently with its Fire TV set-top box. The third pillar of “fire” is shaping up to be Amazon’s long-rumored smartphone, with an announcement expected in June. There’s no official name for the handset yet — I like the Fire Fone, but that’s just me. Sources told the Wall Street Journal this week that Amazon’s phone will go on sale three months after its introduction, meaning a September launch. Few details have leaked, save for the recurring rumor of multiple eye- and head-tracking cameras in the phone and a glasses-free 3-D screen. The former makes more sense to me than the latter, as Amazon can better learn where consumers are looking when they browse products on Amazon’s website.

While no other details surfaced this week, I’d be shocked if the phone ran anything other than Android. It simply makes sense, given that Amazon already has a mobile app store filled with 200,000 titles for its Fire OS tablets; surely the phone would run the same software. And that does nothing to help Google, because Amazon’s fork of Android doesn’t include any Google services: Amazon, not Google, reaps all the rewards of gathering personal data from its devices.

Amazon Instant Video on iOS devices

I’m also expecting Amazon’s phone to have another key difference from currently available Android handsets: Amazon Instant Video. Amazon released that app for Apple iOS devices but never for the Google Android platform. Of course, Android has its own share of “exclusives”; the upcoming Android Wear smartwatches won’t likely work with Amazon’s phone. Instead, you’ll probably need a Google Android device for the LG G Watch or Moto 360 when they arrive in the next few months. Those thinking these smartwatches would be incredibly expensive got some good news this week: LG confirmed a £180 price for the G Watch in the U.K. That suggests a price near or under $200 for the watch in the U.S., as device prices typically aren’t straight currency conversions. At that price, the G Watch would fare well against other contenders for your wrist, such as the $249 Pebble Steel.

Samsung too has wearables that work with Android phones — if the phones have the Samsung name on them — even though the smartwatches themselves don’t run Android. I’m currently taking the Gear Fit for a spin and shared some preliminary thoughts and details on the device. I think the hardware is outstanding, but Samsung has room for improvement on the software side. The company seems to have less of a challenge with the Samsung Galaxy S5, however, which gained mostly positive reviews this week. The phone is now available for sale and I have a review unit in hand, so I’ll have more to say in the coming days about the Galaxy S5 and Gear Fit. I already have little doubt about the hardware: the phone is fast, takes excellent pictures so far, and has a fantastic display. Samsung has listened to consumers (and even some reviewers) who suggested the interface on the prior model was a bit clunky and non-intuitive. I’ll find out how much better the new phone is and share details.

Read More...
posted 8 days ago on gigaom
Today’s package is whimsical, mostly because I am in that kind of mood.

One episode worth rewatching from each season of Mad Men. Thanks, New York magazine, for basically making sure I don’t do anything before Mad Men mania sweeps over me & Twitter.

Why we are in a new gilded age: Paul Krugman reviews Capital in the Twenty-First Century by Thomas Piketty.

Letterman’s last great moment: Outside of John Gruber’s Mac-related stuff, Bill Simmons’ pop culture commentary is a must-read for me, and this piece about David Letterman doesn’t disappoint.

Just cheer, baby: The life of a cheerleader isn’t all fun and games, and the hard life has led to a Raiderette suing the football team.

Is there anything beyond quantum computing? Scott Aaronson tries to answer the question.

More time is better than more money, says Kevin Kelly. I agree with him, but only when I have enough money in my bank.

Pimco’s Bill Gross picks up the pieces: Sheelah Kolhatkar tells the story of the investing legend, who has been dealing with negative press following the exit of his CEO, Mohamed El-Erian.

Read More...
posted 9 days ago on gigaom
After months of rumors, Amazon’s smartphone ambitions are reportedly set to take shape in June. That’s when the company will introduce its smartphone, according to a Wall Street Journal report published Friday. Amazon’s phone is expected to offer multiple cameras and a glasses-free 3-D experience when it goes on sale in September.

Much of the Journal’s report reiterates prior leaks, so there’s not much new information here save for one of the most important aspects: an actual release date, or at least the months of Amazon’s phone announcement and launch. As for those cameras? They’ll “employ retina-tracking technology embedded in four front-facing cameras, or sensors, to make some images appear to be 3-D, similar to a hologram,” said the Journal’s sources.

A September sale would likely pit Amazon directly against a new iPhone (or two) when vying for consumer purchases. Unlike Apple, however, Amazon typically doesn’t seek to earn profits from hardware sales; instead it offers devices at lower prices and makes money from related software, services and goods sold through Amazon.com. The Journal’s sources said that Amazon has been showing off early releases of the phone hardware to developers, likely to build interest. The company already woos developers to its Amazon Appstore, which hosts modified Google Android applications that run nicely on the company’s Kindle Fire tablets.

I suspect Amazon will continue to build upon the open-sourced version of Android for its phone, just as it does with the Kindle Fire and the new Fire TV. Doing so keeps software development costs down, as AOSP, the Android Open Source Project, offers the basic building blocks of smartphone software for free. In fact, with the Kindle Fire tablets, Amazon has already done much of the software work that’s needed for a phone. There’s a browser, an email app and support for third-party software. Adding cellular radios and a corresponding phone application isn’t a simple task, but the heavy lifting has already been done.

One bit of software I anticipate will surely be on Amazon’s phone is Amazon Instant Video. Although nearly any Google Android device can play music through Amazon’s MP3 player or show e-book content in the Amazon Kindle app, not a single Android phone or tablet currently supports movie or television content through Amazon. The company has never released a version of Instant Video for Android, so keeping it for its own phone will certainly stir up a little demand.

Read More...
posted 9 days ago on gigaom
As regulators attempt to sift through the possible public harms and benefits of Comcast’s $45.2 billion plan to buy Time Warner Cable, we thought it was worth showing that if the deal takes place, it could lead to a significant jump in the number of broadband subscribers living under a data cap. If we add Time Warner Cable’s 11.6 million broadband subscribers from the end of 2013 into the mix of customers with caps, the total percentage of U.S. homes that have some type of cap or other limit on downloads rises to 78 percent, up from 64 percent today. That’s a significant jump, especially since the number of homes with caps plateaued after 2011, when AT&T hopped on board the bandwagon that Comcast started driving in 2008. A side note for data nerds: the percentage of capped consumers could be a bit higher, because the Leichtman Research Group data we use to calculate subscribers only accounts for 93 percent of the total number of broadband subscribers.

Now, that’s not to say we will definitely reach that 78 percent, given that Comcast has pledged to divest itself of 3 million pay TV subscribers in order to help get the deal through regulatory screens. However, it’s unclear which markets might be divested and whether or not those markets would go to a buyer that also has a cap. Of the major cable providers in the U.S., only TWC and Cablevision don’t have caps. And even if you take those 3 million broadband subscribers out entirely, we’re still looking at 74 percent of U.S. broadband subscribers hitting a cap.

As a Time Warner Cable customer who currently doesn’t have a broadband cap, I can’t say that I view this deal as a good thing. I imagine that the 10 to 13 percent of U.S. homes that would join the capped majority would feel the same. There’s still time for the FCC to take a harder look at caps — or, as Comcast calls them, data thresholds. For those who want to see who’s capping their broadband, check out our chart from November 2013.
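For fellow data nerds, the cap math is easy to sanity-check. In this sketch the size of the total U.S. broadband base is inferred from the percentages in the story rather than reported directly, so treat it as an assumption:

    # Back-of-the-envelope check on the data-cap percentages above.
    twc_subs = 11.6e6          # TWC broadband subscribers, end of 2013
    capped_today = 0.64        # share of U.S. broadband homes with a cap now

    # Adding 11.6M subscribers moves the capped share from 64% to 78%,
    # which implies a total base of roughly 83 million broadband homes.
    total_subs = twc_subs / (0.78 - capped_today)

    # If Comcast divested 3M subscribers to a buyer with no cap:
    capped_after_divest = 0.78 - 3e6 / total_subs
    print(f"implied total base: {total_subs / 1e6:.0f}M homes")
    print(f"capped share after divestiture: {capped_after_divest:.0%}")   # ~74%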

Read More...