posted about 5 hours ago on gigaom
The battle for our mobile attention has put convenience and user experience on a pedestal and, as a result, a ton of friction has been removed from our day-to-day activities: shopping, ordering food, banking, traveling… But it often seems that, to gain access to these conveniences, we have no choice but to wade through complex waters, aka mobile service providers. With two-year commitments, opaque plans and fee-tangled bills, many of the carriers that enable today’s celebrated mobile innovations are not, themselves, widely celebrated. This is why consumers might want to pay closer attention to MVNOs (mobile virtual network operators). MVNOs give you access to the major US mobile carrier networks (AT&T, Verizon, Sprint, T-Mobile, etc.), but with their own pricing and packaging. For customers, that can translate to lower rates, more flexible contracts and better customer service. Representing a smaller segment of mobile subscribers (just one in ten US subscribers a few years back) and competing against powerhouse brands, MVNOs are the underdogs of the mobile service space, which is why any new provider entering the fray needs to work hard to stand out. This is the challenge for the newly announced Tello. Tello runs on Sprint’s network in the US, but has been operating in the UK for two years already. (The parent company, KeepCalling, has been around since 2002.) At the core of Tello’s US offering is a pledge of “No Fees, Whatsoever”: no activation fee, no overage fee, no processing fee, no early-termination fee. Tello plans can be fully customized, so you’re not paying for something you don’t use (fitting for those who don’t use their phones to, you know, make calls), and can be upgraded or downgraded easily if you find, for example, that you’re hitting your data limit. Technically, data is unlimited, as speed is simply throttled down to 64kbps once you hit your limit.
Also good to know, if you’re thinking about switching providers: you can choose to buy a phone from Tello, but there’s also the option to bring your own. As mentioned before, Tello is a contract-free service, but for those who prefer to avoid plans altogether, Tello has a Pay As You Go option that lets you buy, say, $5 of credit and use it for national and international calls or texts. Of course, Tello isn’t the first provider to tackle the pain points of mobile service. Ting, for example, offers a plan that allows you to pay based on usage, and carriers like T-Mobile often cover the cost of termination to facilitate switching. But scratching out every fee and keeping costs low (Tello has a customizable plan that starts at $5 monthly, or, if you want data, $9/mo for 100 minutes of talk, 200 texts and 200 MB of 4G LTE) gives the company a fair chance to stand out. Still, one of the bigger questions that comes to mind is: if MVNOs are presenting such competitive offerings, why aren’t a larger share of mobile subscribers using them? Is it a testament to the long-term brand effects of TV advertising? Are consumers still tethered to brick and mortar, taking comfort in having a place to go if something goes awry? (Tello, for example, is exclusively online.) While these factors may have a big impact today, they may lose their foothold as a new generation of cell phone users and cord cutters comes to market. Tello may be ahead of its time, though given that early adopters always seem to be ready for the next opportunity to assert their early-adopter-ness, that may be an advantage unto itself.

Read More...
posted 2 days ago on gigaom
As the AWS Pop-up Loft closes after its most recent two-week stint, I thought I would catch up with Ian Massingham, AWS Technical Evangelist, to see how it had gone. To explain: the ‘Loft’ is the ground floor of Eagle House, a converted office block on City Road, which runs up towards Kings Cross from the heart of London’s tech start-up scene, Old Street and the Silicon Roundabout. The aim of the Loft — the clue’s in the term ‘pop-up’ — is to offer a temporary space for an educational programme aimed at organisations looking to use AWS technologies in anger. “It was never meant to be a long-term thing,” explains Ian. “We thought that by coming back periodically, we’d be able to connect with different cohorts of customers, at different points in their development.” There’s an “Ask the Architect” (think: Genius) bar, a co-working space and a room for sessions, plus booths for support teams and training partners, who are on call to answer questions. The single-track timetable has been filled with back-to-back sessions on a wide range of topics, from IoT to machine learning, from introductory to deep-dive technical, from shorter to longer formats, aimed at a variety of audiences. So, what were my take-away thoughts? Interestingly, these were less about the topics themselves, and more about how they were delivered. The model is simple: you register, you come, you learn, you have the opportunity to ask questions and participate in workshops, chalk-and-talk sessions and hackathons. It’s been intense, but that was the plan, says Ian. “We’ve learned a lot from previous pop-ups, on how to make the best use of people’s time.” Not least that the content — educational content, that is — is king. While this may appear self-evident, less clear is the importance that should be attached to providing a diverse range of materials. “You need to create the right interaction channels for different types of customers. 
While a large base of our customers expect to self-serve, others will want full support. And similarly, some like to read documents, others like videos, others like classroom training. It’s up to us to be ubiquitous, so people won’t get unhappy even if the majority of content is not directly appropriate to their needs.” Secondary plus points concerned the location (“Yes, sure, the location is important, we’re right in the centre of the startup community”), the food (“Developers run on beer and pizza”) and so on, but these were seen as hygiene factors for the pop-up. Formal feedback has not been collated, but the signs are good that the key goal of the event, to “get people productive on the platform,” was achieved. As, if not more, important was that people got what they wanted and more. “I was just told, ‘This is great, I love it, it’s so convenient to engage with your architects.’ ” The message, as I read it, was one that events of any size and scale could take away: whatever the format, make delivery of a range of excellent content, to fit a diverse audience, the primary goal. So, yes, context is important: nobody wants to travel to the back of beyond to attend an event of any form. But head and shoulders above this is the range and applicability of the content. If this appears obvious, it raises a question: why do so many events, held in far more glitzy and, dare I say, exotic locations (sorry, Shoreditch), tend to forget this simple yet important truth? Unlike software developed without regard for its users, events should focus first and foremost on meeting the needs of their attendees. If Amazon Web Services, purveyor of online platforms that depend heavily on the self-service model, recognises this, then so should everybody else.

Read More...
posted 4 days ago on gigaom
Because Busybot and Slack look so much alike and are so tightly connected, I avoid the cognitive costs of switching. I’ve tried using work management tools like Asana in connection with Slack, and the results have been mixed, principally because — I think — there is a mismatch in the basic orientation of the tools: Slack is messaging-centered, while Asana is task-centered. In the case of a tool like Asana, when the Slack connection is used, notifications are sent to a Slack channel whenever changes occur in the Asana workspace: for example, whenever a task is created, completed, or commented upon. A slash command (‘/asana’) lists tasks, and arguments to the command can create tasks, assign them, and comment on them.

[Image: Asana integration in Slack]

But I confess that I have found this style of integration difficult. The two models of use — chat-based conversation in Slack and task-based coordination in Asana — don’t align for me, and the mapping from an Asana workspace to a Slack channel doesn’t always line up right. And I don’t necessarily want every tweak to a task dumped into the channel in Slack. I don’t want that endless stream of noise, because Slack is noisy enough. I recently encountered a tool that takes a different tack. Busybot avoids the mismatch problem by operating in a parasitic way: it relies on Slack’s architecture to the greatest extent possible. For example, there is no independent login: you use Slack’s login. And once logged in, the channels of the team that you sign into are duplicated as contexts for tasks in Busybot. Here’s the login:

[Image: login for Busybot]

Here’s the #general channel for workfutures.io in Slack. You can see that I /invited busybot to the channel (I had already created the integration).

[Image: Inviting and Creating a Task]

I typed a message to busybot, ‘ask Esko for a contribution’. If I had added [email protected], that would have assigned the task to me as well. 
[Image: workfutures.io Slack team]

Over in Busybot, everything looks extremely similar:

[Image: Task in Busybot]

On the left, the design of Slack is emulated, so that for each Slack channel there is an equivalent Busybot channel, where all tasks can be found. I’ve selected the ‘ask Esko’ task, and the task pane opens. I’ve selected the ‘add checklist’ feature.

[Image: Task Checklist]

I’ve added a single checklist item, but you can have as many as needed. Descriptions, comments, deadlines, and assignment of the task are also available as metadata. The task list can be sorted, which is moot in this case, since there is only one task. Also note that the [email protected] option at the top opens all the tasks assigned to me, and ‘all tasks’ opens all tasks in the team, sorted by channel. Tasks can be added, edited, and deleted in Busybot, but at present can only be created and displayed on the Slack side of the integration. I’ve been told by Busybot’s CEO and founder, Damian Bramanis, that various new features are coming, like multi-team functionality, new ways to group tasks in views, and tags.

Conclusions and Takeaway

Busybot works for me despite the minimal degree of metadata, and I think the reason is the equivalence between the Slack and Busybot information models: I don’t have to switch gears mentally when I move from Slack to Busybot, or vice versa. It feels like I am in the same place, just looking at different attributes of the same system of information. Moving from Slack to Busybot feels like zooming in on task details that are suppressed on the Slack side. Because the two ‘sides’ look so much alike and are so tightly connected, I avoid the cognitive switching costs of moving from Slack to non-parasitic tools like Asana. That said, I’d like to be able to do more with Busybot. 
For example, I’d like to be able to change task attributes on the Slack side, like adding a comment to a task, so that the text of the comment would appear both in the Slack chat history and in the task’s comment thread. Damian tells me they are working on ways of accomplishing more sophisticated sorts of integration like that, perhaps with a /busybot command, or clever use of the channel topic (setting the topic to the name of a task, for example, so that commands could refer to that task). At any rate, I will be watching the developments at Busybot with close attention. Crossposted 1 May 2016 on workfutures.io. Update 1 May 2016 4:30pm: Several folks mentioned Swipes for Slack as another approach to accomplishing some or all of what Busybot does. I will review it in another post.
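The slash-command pattern discussed above — ‘/asana’ today, perhaps a ‘/busybot’ tomorrow — is simple enough to sketch. The handler below is a hypothetical illustration, not Busybot’s or Asana’s actual code: Slack POSTs form fields such as `command`, `text`, and `channel_id` to an integration’s URL, and here that payload is just a dict. The `parse_task` helper and its `@assignee` convention are my assumptions.

```python
# Hypothetical sketch of a Slack slash-command handler, in the spirit of
# '/asana' or a possible '/busybot'. Slack sends form fields such as
# 'command', 'text', and 'channel_id'; here the payload is a plain dict.
# The '@assignee' convention is an assumption for illustration.

def parse_task(text):
    """Split 'task title @assignee' into (title, assignee-or-None)."""
    title, _, assignee = text.partition("@")
    return title.strip(), (assignee.strip() or None)

def handle_slash_command(form, task_store):
    """Create a task from a slash-command payload and return the JSON
    body the endpoint would answer with (echoed back in the channel)."""
    title, assignee = parse_task(form.get("text", ""))
    task = {"title": title, "assignee": assignee,
            "channel": form.get("channel_id")}
    task_store.append(task)  # stand-in for the task tool's own storage
    reply = f"Created task '{title}'"
    if assignee:
        reply += f", assigned to {assignee}"
    return {"response_type": "in_channel", "text": reply}

tasks = []
print(handle_slash_command(
    {"command": "/busybot", "text": "ask Esko for a contribution",
     "channel_id": "C024BE91L"}, tasks)["text"])
# → Created task 'ask Esko for a contribution'
```

The `"response_type": "in_channel"` field is what makes the confirmation visible to the whole channel rather than just the person who typed the command.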

Read More...
posted 6 days ago on gigaom
One of the nice things about the Internet Age being relatively new is that many of its earliest pioneers are not only still around, but still doing interesting new work. Among these titans, few loom as large as Bob Metcalfe: inventor of Ethernet, coiner of Metcalfe’s Law, founder of 3Com. Bob was there in the early days at PARC, and today you can find him at the University of Texas promoting entrepreneurship and startups, and keeping his eyes open for the next big thing. When considering keynote speakers for Gigaom Change, an event about the present disruption of business through new technology such as AI and robots, I wanted to find someone who had seen a new technology arrive at the very beginning, ushered it through to commercial success, and finally helped it impact the entire world. I had a short list of candidates and Bob was at the top. Luckily, he said yes. I caught up with him Monday, April 25, and all but ambushed him with a series of questions about the kinds of changes he expects technology to bring about next.

Byron Reese: So I’ll ask you the question that Alan Turing posed way back: “Can a machine think?”

Bob Metcalfe: Yes. I mean, if human beings can think, then machines can think.

And so, you believe we’ll develop an AI.

Yes, absolutely. The brain consists of these little machines, and eventually we’ll be able to build little machines, and then they’ll be able to think.

Do you have an opinion on what consciousness is?

It has something to do with attention. That is, focusing the activities of the thinking machine; focusing them in on a certain set of inputs. That’s sort of what consciousness is.

Do you think we’ll make conscious machines?

Yes. An interesting case of consciousness is when the selected inputs, that is, the ones selected for attention, are internal. That is self-consciousness: being able to look down on our own thoughts, which also seems to be possible with some version of a neural net. 
Would a conscious machine have inalienable rights?

Whoa! Do human beings have inalienable rights? I’m not sure.

We claim we have a right to life, and it’s generally regarded that there are things called universal human rights.

That’s a conflict of interest, because we’re declaring that we have our own rights. Actually, it worries me a little how, in modern-day life, the list of things that are ‘rights’ is getting longer and longer.

Why does that worry you?

It just seems to be more a conflict of interest. Sort of a failure to recognize that we live in a reality that requires effort and responsibility, and ‘rights’ somehow is a short-cut, as in we have a ‘right’ to stuff as opposed to having to work for it.

Do you believe that robots and AI will be able to do all the jobs that humans can do?

I think so. I think that’s inevitably the case. The big issue, as you well know, is whether it’s man-versus-the-machine or man-and-the-machine, and I tend to come down on the ‘man-and-the-machine’ side of things. That is, humans will be enhanced by their robots, not replaced by them.

So, some kind of human-machine synthesis, like augmented memory and all of those sorts of things.

Well, we have that already. I have the entire Google world at my disposal, and it’s now part of my habit that when something comes up that can’t be remembered, I quickly take out my iPhone and I know what it is within a minute. You know, like, ‘Who was Attila the Hun?’ That came up the other day, and you can read the entire life of Attila the Hun within a minute. Although the interface between Google and my thought process is awkward, between typing and reading. I can imagine eventually that we’ll have Google inserted in our heads more efficiently. And then it won’t take 10 years to learn French; it’ll take just a few minutes, because you’ll just ‘plug it in’.

What do you think people will do in the future if machines and AIs are doing all the things that have to be done?

I don’t know. 
I guess, you know, a hundred years ago everybody knew how to milk cows. Well, 40 percent of the population knew how to milk a cow. And now, you know, the percentage of people who know how to milk a cow is pretty small, and there are robots doing it. And somehow all of those people managed to get employed in something else, and now they’re UX/UI engineers, or they’re bloggers, or they’re data scientists. Somehow all those people stopped milking cows and started doing something at a higher level in Maslow’s hierarchy.

There are two potential problems with that, though. One is if the disruption comes too quickly to be absorbed without social instability. The second is that in the past we always found things to do because there were things we could do better than machines. But what if there’s nothing we can do better than a machine? Or are there things only people can do?

You’ve wandered out of my area of expertise. Although, on the ‘happened too quickly’ front, as we’re seeing in Austin this week, the status quo can slow things down, like the Uber-Lyft slow-down initiative here in Austin. We like taxis here in Austin rather than Uber and Lyft, apparently because they’re safer.

What are you working on? Enough about the big issues; how do you spend your days?

I spend my days on the big issues, and the big issue is innovation as a driver of freedom and prosperity; and the tool of innovation that I’ve settled on perfecting and promoting and professing is startups. Startups as vehicles — as innovation vehicles — and mostly coming out of research universities. So most of what I do is focused on that view of the world.

Why did you choose startups as the mechanism of innovation?

Because startups, in my experience, have been the most effective way to innovate. Everyone loves innovation as long as they’re not being innovated upon, and as soon as they are innovated upon they become the status quo, which is resourceful and nasty and mean. 
And so the most effective tools against the status quo, in my experience, have been these startups, which at their core are champions of innovation. I got the word champion from Bob Langer at MIT; he believes these new technologies need champions, which is why he likes startups. A startup is a place where champions go to gather resources, coordinate their efforts and scale up. So I guess it’s their effectiveness in having real impact with innovations that causes me to admire and profess startups.

It’s interesting, though, that as much as what you call the status quo can slow down innovation, nothing can really ever be stopped, can it? I mean, big whale oil didn’t stop kerosene, and big kerosene didn’t stop electricity.

The rate of advance can be slowed. The internet is old now; it started running in ’69. Just think how many years have passed, nearly 50 years, to get where we are today.

Is that fast or slow, by the way?

I would say that’s very fast. We’ve had recorded history, and by that I mean writing, for 5,000 years. We have therefore had the Internet for only 1% of recorded history.

Are you overall optimistic about the future that all these new technologies and startups are going to usher in? Do you think it’s going to be a better future, or not?

I’m a better-future believer, an optimist, and an enthusiast. I think cynics are often right, but they never get anything done. Just as a matter of choice, without assessment, I choose to be optimistic.

Last question: aren’t startups fundamentally irrational, in the sense that the likelihood of success is so small and the risk so high that one has to be somewhat self-deluded to undertake one? I ask this, of course, as someone who has done several.

Maybe that circles us back to your big question before. Maybe that’s what makes us human: we need to delude ourselves to make progress. Maybe robots won’t do startups because they’re too rational.

Read More...
posted 6 days ago on gigaom
Yes, I am breaking one of my own unwritten rules: putting two question marks in a post title. But this story warrants it, particularly since what I am writing about won’t get much play. Rick Osterloh, the former head of Motorola, left brand owner Lenovo a month ago. Google, you may recall, had acquired Moto, kept a pile of patents and an advanced technology group, and spun the rest off to Lenovo. Apparently Sundar Pichai, Google’s CEO, thinks that Osterloh is the one to make sense of the many, many hardware efforts that Google has found itself running. So Osterloh will be overseeing Google’s Nexus, Chromecast, laptops and tablets (Chromebooks and the Pixel C tablet), OnHub (the home router that is the camel’s nose under the tent flap of the living room), ATAP (the advanced technology and projects group, with efforts like Project Ara), and (drumroll) Glass. Yes, Glass. Remember Glass? I have said that putting Glass under Tony Fadell (CEO of Nest) would lead to its re-release as a formidable player in what is likely to be the next platform: augmented reality. But Fadell has had a lot of trouble since Google acquired Nest, and Glass has remained in the shadows. Google is still best positioned to bring AR to prominence with something derived from Glass. Maybe Osterloh is the one who’ll make it happen. But sooner or later the next era of computing will arrive, and after that day all of us will be wearing Google Goggles — or something very like them — and nothing will ever be the same.

Read More...
posted 9 days ago on gigaom
I am at the OpenStack Summit here in Austin, and the announcements and releases keep rolling out, illustrating that the growing OpenStack market has some real teeth and is taking a bite out of the market standbys. Even so, there is still a great deal of fear, uncertainty and doubt around the viability of clouds built upon OpenStack. The real question is whether that FUD is unfounded for today’s emerging markets, which makes a closer look at OpenStack a must for businesses delving further into public, private and hybrid clouds. The OpenStack project, now managed by the OpenStack Foundation, came into being back in 2010 as a joint venture between NASA and Rackspace Hosting, with the goal of bringing collaborative, open-source software to the then emerging cloud market. Today, the OpenStack Foundation boasts that some 500 companies have joined the project, and the community now collaborates around a six-month, time-based release cycle. OpenStack, an open-source software platform for cloud computing, has become a viable alternative to the likes of Amazon (S3, EC2), Microsoft Azure and DigitalOcean. Recent research by the 451 Group has predicted a 40% CAGR, with the OpenStack market reaching some $3.5 billion by 2018, enough to make all players involved take notice. However, the big news out of OpenStack Summit Austin 2016 comes in the form of product announcements, with more and more vendors aligning themselves with the platform. For example, HPE has announced its HPE Helion OpenStack 3.0 platform release, which is designed to improve efficiency and ease private cloud development, all without vendor lock-in problems. Cisco is also embracing the OpenStack movement with its Cisco MetaPod, an on-premises, preconfigured solution based on OpenStack. 
Another solution out of the summit is the Avi Vantage Platform from Avi Networks, which promises to bring software-defined application services to OpenStack clouds, along with load balancing, analytics, and autoscaling. In other words, Avi is aiming to bring agility to OpenStack clouds. Perhaps the most impressive news out of the summit comes from Dell and Red Hat, with the Dell Red Hat OpenStack Cloud Solution Version 5.0, which incorporates an integrated, modular, co-engineered and validated core architecture, and leverages optional validated extensions to create a robust OpenStack cloud that integrates with the rest of the OpenStack community offerings. Other vendors making major announcements at the event include F5 Networks, Datera, DreamHost, FalconStor, Mirantis, Nexenta Systems, Midokura, SwiftStack, Pure Storage, and many others. All of those announcements have one core element in common: the OpenStack community. In other words, OpenStack is here to stay, and competitors must now take the threat of the open-source cloud movement a little more seriously.
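For a sense of scale, the 40% CAGR figure quoted above compounds quickly. The sketch below is a back-of-the-envelope check; the choice of 2015 as the base year is my assumption for illustration, not 451’s stated baseline.

```python
# Back-of-the-envelope check on the 40% CAGR / $3.5B-by-2018 projection.
# Assumption: the $3.5B 2018 figure is three years of 40% growth from 2015.

def project(base, cagr, years):
    """Compound a starting market size forward at the given annual rate."""
    return base * (1 + cagr) ** years

base_2015 = 3.5e9 / (1.40 ** 3)  # implied 2015 market size, roughly $1.28B
for offset in range(4):
    size = project(base_2015, 0.40, offset)
    print(f"{2015 + offset}: ${size / 1e9:.2f}B")
# A 40% CAGR multiplies the market about 2.7x over those three years.
```

Seen that way, the projection implies the market nearly tripling between the summit and 2018, which is why the incumbents are paying attention.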

Read More...
posted 13 days ago on gigaom
When it comes to developing a successful mobile strategy, and building a long-lasting relationship with customers, a CMO is often faced with difficult considerations around the best way to measure success. Creating an app and investing significant amounts of money acquiring users is no longer enough to remain on the ‘first screen’ of any given mobile device, which is where any organization ultimately needs to be. Teams must focus their efforts on techniques and campaigns that won’t only secure installs, but will maintain loyal relationships with mobile users. There has been ample research on determining the ROI of the vast amounts that businesses invest in acquisition. Now, though, it’s becoming increasingly apparent that the same attention should be given to money spent post-install as well. Features of an engagement strategy, such as push notification campaigns, in-app messaging and user-experience A/B testing, are all techniques you’ll need to invest in to help deliver successful mobile relationships. Now all you need to do is demonstrate that there is a greater need for money spent here rather than elsewhere… So, if you’re the CMO in this situation, how do you prove this effectiveness and need? Well, after adopting some form of mobile marketing platform to handle this task, you would hope to see your mobile analytics change: improvement in your engagement, retention, and ultimately your revenue numbers. Perhaps obviously, this is the first and easiest way to consider ROI. Once you get a grasp on it, and you begin to see these numbers change, calculating ROI is relatively easy. Think of it this way: if we grow a metric like average revenue per user (ARPU) from $5 to $10 using a marketing automation program, and we have 1 million monthly active users, then we can put $5 million per month into the credit column. 
If the total monthly spend on the program amounts to $100,000, that works out to a very (very!) satisfactory 4,900% ROI. Granted, it won’t always be ARPU that we’re measuring, but in the vast majority of cases there will be metrics with which we measure mobile success, and once we add a quantifiable value to these, we’ll be able to establish decent ROI estimates.

The Campaign Level

Another, perhaps more reliable, way to measure ROI is to focus specifically on individual campaigns. Doing this will allow you to measure the effect of any changes within specific campaigns and sum them to provide a total benefit. Assuming that you’re using a good marketing automation platform, you should get clear results from each individual campaign, against whichever metrics you choose, compared to a control group in order to isolate other variables. By combining these multiple campaigns, we have a cumulative benefit that can be used to calculate ROI on the overall spend. Of course, this approach won’t necessarily capture some benefits, such as the effect an overall improved experience can have on word of mouth, but it’s probably better to be conservative when calculating ROI anyway. One thing that is vital to remember: don’t go looking for evidence of ‘good results’ after you’ve run the campaign. Human nature being what it is, you’ll probably find some. The key is to first identify the metrics you want to affect, and the effect you hope to have, before you implement the campaign.
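Put as a formula, the ROI here is (incremental revenue − spend) ÷ spend. A minimal sketch using the example’s numbers follows; the function name is mine, not from any particular marketing platform.

```python
# ROI on engagement spend, per the worked example above: ARPU lifted from
# $5 to $10 across 1M monthly active users, against $100k of monthly spend.

def roi_percent(arpu_before, arpu_after, monthly_users, monthly_spend):
    """(incremental revenue - spend) / spend, expressed as a percentage."""
    gain = (arpu_after - arpu_before) * monthly_users
    return (gain - monthly_spend) / monthly_spend * 100

print(roi_percent(5, 10, 1_000_000, 100_000))  # → 4900.0
```

The same helper works for any metric you can convert to a dollar value per user, which is the real prerequisite for this style of ROI estimate.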

Read More...
posted 16 days ago on gigaom
It’s no secret that Application Performance Monitoring (APM) is becoming a critical competency in today’s enterprise networks. After all, so many enterprises are moving to a cloud-based model that leverages tiers of service, which brings unforeseen complexity into the task of keeping things running smoothly. Traditionally, most IT managers have thought of application performance as a direct result of the static resources available, focusing on elements such as processor performance, available RAM, and perhaps the traffic saturation of the Local Area Network (LAN). Although monitoring those elements remains critical for providing adequate performance, the APM game has changed, or more appropriately evolved, into something that must address much more than the status of the basic hardware that makes up a network. That change (or evolution) has been driven by the adoption of technologies such as cloud services, hybrid cloud deployments, mobility, content delivery networks (CDNs), hosted databases and so on. Those new technologies have changed the basic dynamic of how an application is delivered to an end user and how the endpoint interacts with the application’s data. A good example is a line-of-business application delivered by a cloud service: a hosted application server delivers the application to an endpoint via a browser connection, the associated data is stored in a hosted database, and connectivity to the application and data is provided via the internet over a VPN. In that situation, multiple elements have to work in concert to provide acceptable application availability and performance, and any one of those “tiers” can have an effect on the application. What’s more, any single tier can impact any other, especially when virtualization is involved or a software-defined solution (SDN, SDDC, SDS, etc.) underpins operations. 
Take the above example and apply it to the real world, where an IT manager gets a trouble ticket forwarded from the help desk that simply states “user is complaining of a slow application”. For that IT manager, the big question becomes where to start. The traditional approach starts with a look at the hardware and the network. However, that approach is all but useless in today’s world. Today, the IT manager must turn to an APM platform to track down a problem, and getting the proper intelligence out of that platform is a critical component of successfully remediating any application performance problem. That said, the typical APM platform is little more than a measurement and reporting tool; it will assist an IT manager in solving the problem, but that IT manager must understand how the tiers of a hybrid, cloud-served network deliver an application. That understanding brings us to how the OSI model can serve as a tier template for application delivery. If you think about the seven layers of the OSI model and what each is responsible for in the realm of network communications, it becomes clear how some of those layers can be mapped to the tiers of application delivery. The OSI model is broken out into seven layers, each with specific functions, and each of those functions maps directly to the movement of information across a network. If you align that basic concept with APM, it becomes clear how a symbiotic relationship is formed between application delivery and the constructs of the OSI model.

[Image: a Citrix-based example]

When comparing the two models, it becomes clear that the OSI model is intertwined with the best practices of APM troubleshooting. The question here becomes: how well do IT managers understand the implications of APM, and how does understanding the OSI model become a critical competency for success? 
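To make the layer-to-tier mapping concrete, here is an illustrative sketch. The tier names and example checks are my assumptions, not a standard or any vendor’s model; the point is simply that each OSI layer lines up with something an APM platform can measure, and that triage runs top-down from the user-facing tier rather than from the hardware.

```python
# Illustrative mapping of OSI layers to the application-delivery checks an
# APM platform might surface. The tier names and example checks are
# assumptions chosen to show the shape of the mapping, not a standard.
OSI_TO_APM = {
    7: ("Application",  "app server response time, error rates"),
    6: ("Presentation", "TLS handshake latency, encoding failures"),
    5: ("Session",      "VPN tunnel state, session timeouts"),
    4: ("Transport",    "TCP retransmits, connection resets"),
    3: ("Network",      "internet/WAN path latency, packet loss"),
    2: ("Data link",    "LAN saturation, interface errors"),
    1: ("Physical",     "link status, hardware faults"),
}

def triage_plan():
    """Walk the layers top-down: an APM-led triage starts at the
    user-facing application tier, not at the hardware."""
    return [(layer,) + OSI_TO_APM[layer] for layer in range(7, 1 - 1, -1)]

for layer, tier, checks in triage_plan():
    print(f"L{layer} {tier}: {checks}")
```

For the “slow application” ticket above, a walk like this turns a vague complaint into an ordered checklist, and the traditional hardware-first approach is simply this list read upside down.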
For more information on the best practices for APM, please take a look at a webinar I participated in for eG Innovations, which can be found at http://www.eginnovations.com/webinar/application-performance-monitoring-and-management-solution/eg-enterprise-monitoring-tool.htm.

posted 20 days ago on gigaom
It seems like millennia since hotel chains began trying to tailor new ‘experiences’ that line up with carefully researched millennial leanings. Now, after many attempts to create more social shared spaces, new aesthetics to counter the old-school tastes of Boomers and Gen Xers, and other supposed innovations, we are seeing some new takes that ditch the millennial psychobabble and really try to get at what is emerging as travelers’ real desires. Hyatt has launched a new Centric line of hotels, which feels like a serious departure from the adjective-laden attempts to get at the psyche of business and leisure travelers, and which instead just gets out of the way. A brand video refers to guests as “Modern Explorers” and “wish-listers.” The ‘lounge-centric’ design reminds me of the Ace Hotel in NYC, where guests and locals interact in a library-inspired setting. “We call them Modern Explorers because these are travelers who are very curious, very independent, and very time crunched,” says Kristine Rose, VP of brands, Hyatt. “They have a wish list and they really want to make the most out of all of their experiences and reasons for traveling.” These travelers want to be in the center of the urban experience, to interact with locals: local people, local food, local attractions. The restaurant is called ‘the Local Bar and Restaurant’ and will feature local dishes served up for the ‘casual foodie’. I can attest to the attractiveness of the Centric concept to non-millennials, since I am a late Boomer, and the practicality of ‘the essentials done right’ combined with a deeply local orientation could be the definition of a cure for the experience I have regularly when traveling, even in luxury hotels. At the other end of the spectrum, Hilton is also working away at trimming out the inessential; in its new Tru hotels, however, it is cutting out business-oriented amenities like desks and targeting the budget-conscious leisure traveler.
As the company says, “Tru by Hilton is a category disrupter. It’s built on a belief that being cost conscious and having a great stay don’t have to be mutually exclusive. Tru focuses on what matters most to guests, with a hotel that is more than just a place to sleep, it’s a true travel experience.” Hilton is running Tru as a 100% franchise operation, with systems designed from the bottom up to cut operational costs, leading to a $75-$90/night price point. This is an effort to appeal to people who might otherwise turn to Airbnb for accommodations, but who’d really rather have a no-frills hotel, so long as quality reaches some minimum. So what is the deep trend? Modern travelers want no-fuss, easy in-and-out hotels that meet some promise of quality at a price — at various tiers — but that appeal to their desire to explore the hotel’s locale rather than remaining cooped up in private rooms or stodgy same-old-same-old eateries. A return to simplicity: a night’s stay and off you go!

posted 20 days ago on gigaom
Perhaps Bitcoin’s greatest gift to the web is not the disruptive nature of a digital currency, but the platform used to build that distributed, worldwide, decentralized crypto-currency. Bitcoin’s platform, often referred to as a blockchain, uses an innovative approach to keep transactions secure, validate ownership and guarantee provenance. A blockchain consists of a distributed cryptographic ledger shared amongst all nodes participating in the network, where every successfully performed transaction is recorded and shared. In other words, blockchains are proving to be a fully auditable, incorruptible database that can withstand any known hack or attack. Although the importance of the blockchain is often lost amongst the discussion of digital currency, blockchains have the potential to disrupt how the internet itself works. Simply put, there is a lot more to blockchains than just crypto-currency and monetary applications. Truth be told, a blockchain is a decentralized ledger protocol (and/or platform) that can govern both financial and non-financial application states. A blockchain can be used to power decentralized business logic, which is contained in a cryptographic “element” that has intrinsic value and can only be unlocked if certain conditions are met. The business logic executes on a blockchain platform (a decentralized cloud service) once an automatic process validates that the terms and conditions set forth by participating parties have been met. Those concepts can be used to fuel the creation of P2P (peer-to-peer) or decentralized network solutions that allow virtual communities to create secure, auditable and hack-proof services and applications. Ultimately, blockchains may reign supreme in distributed, shared environments used by both open and closed digital communities – a concept well vetted by Bitcoin, with its secure methodology for handling currency.
However, that leaves one question – how does one build a blockchain and create a community that can use it? One answer comes in the form of an open source platform that goes by the moniker of Ethereum, which is touted as a platform for decentralized applications, but in reality has become a catalyst for building blockchain-based solutions. Ethereum leverages the blockchain ideology by providing both the platform and the development tools to build blockchain-based community solutions, which are decentralized in nature and highly resilient, while also being incorruptible. The ideology of a crypto-currency still permeates the Ethereum platform, yet that currency does not have to have any monetary value. In Ethereum’s case, the crypto-currency is more appropriately referred to as a cryptofuel, which Ethereum has dubbed Ether. Ether is used to power transactions, pay for computational steps and democratize the distributed platform. Without Ether, distributed applications could fall prey to infinite loops, excessive data consumption and many other problems that could effectively destroy a decentralized application – and applications are the key component of a community. However, to use Ether as a cryptofuel to power a creative process, one must embrace the Ethereum platform, which means there has to be much more to Ethereum than a blockchain and cryptofuel. To that end, Ethereum has created a development environment called ETH/DEV, which offers IDEs, tools and resources used to build decentralized applications. Those applications can be fueled by Ether, and therein lies the most important element of the blockchain: the blockchain itself keeps track of the fuel units (Ether), and transactions can be assigned a cost in Ether units, or even Ether payments, making all interactions transaction based.
Ether does not need to have a particular monetary value associated with it – Ether could be based upon reputation points earned, contributions to the community, or any other activity that adds or uses some type of value measurement. For some community projects, content or code contributions may be the key element for earning Ether, which can then be used by the person earning it to “purchase” other elements from the community, escalate content or reach out to new audiences. The blockchain comes into play by creating and maintaining the ledger of who has how much Ether and how that Ether was earned, spent or transferred. In short, the applicability of Ether is limitless. The concept of using a crypto-currency like Ether brings many possibilities to light – for example, digital contracts can be secured using Ether and then recorded in perpetuity via the blockchain. What’s more, Ether can be traded for services, software and other virtual elements, creating an economy based upon distributed applications. One thing is certain: blockchain technology is here to stay, and organizations such as Ethereum are on the cusp of creating new decentralized solutions that eschew traditional borders and physical entities.
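To make the ledger idea concrete, here is a toy sketch in Python. It is deliberately simplified, a single hash-linked chain of value transfers, and does not reflect Ethereum’s actual protocol, consensus mechanism or APIs; the account names and “reasons” are invented for illustration:

```python
import hashlib
import json

class ToyLedger:
    """A minimal hash-chained ledger in the spirit described above.

    An illustrative toy, not Ethereum's real protocol: each block records
    one credit of 'ether' (here, arbitrary value units such as reputation
    points) and links to the previous block's hash, so tampering with
    history invalidates every later block.
    """
    def __init__(self):
        self.chain = []
        self.balances = {}

    def _hash(self, block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def credit(self, account, amount, reason):
        self.balances[account] = self.balances.get(account, 0) + amount
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        block = {"account": account, "amount": amount, "reason": reason, "prev": prev}
        block["hash"] = self._hash(block)  # hash covers everything except itself
        self.chain.append(block)

    def transfer(self, src, dst, amount):
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient ether")
        self.credit(src, -amount, "transfer to " + dst)
        self.credit(dst, amount, "transfer from " + src)

    def verify(self):
        """Recompute every link; editing any earlier block breaks the chain."""
        prev = "0" * 64
        for b in self.chain:
            body = {k: v for k, v in b.items() if k != "hash"}
            if b["prev"] != prev or b["hash"] != self._hash(body):
                return False
            prev = b["hash"]
        return True

ledger = ToyLedger()
ledger.credit("alice", 10, "forum contribution")  # Ether earned by contributing
ledger.transfer("alice", "bob", 4)                # Ether spent within the community
print(ledger.balances, ledger.verify())           # → {'alice': 6, 'bob': 4} True
```

The design choice worth noticing is that balances are just a cache; the chain is the authority, and `verify()` shows why a shared copy of it is tamper-evident.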

posted 20 days ago on gigaom
Paradoxically, here in early 2016, we are witnessing the lowest U.S. gas prices in years, but we are also moving toward a transportation era based on dramatically different economic premises, most obviously driverless vehicles. So it seems a perfect time to dig into the deep economics of cars, their impacts on city life, and what we can anticipate coming down the pike with the rise of driverless vehicles and smarter ways of living in cities once we can depend on AI-augmented transport. Perhaps there is nothing so pedestrian as parking, a global phenomenon that we generally take for granted along with many of the other externalized costs associated with car culture. The hard fact is that the typical car spends 95 percent of its working life parked, which means that little of the value of the car is actually realized. And, according to the AAA, the average cost of owning and maintaining a mid-sized car in the U.S. in 2014 was almost $9,000, of which $1,300 per year goes just to parking! Therefore we should not be surprised that parking is a $100B industry. This despite the fact that as much as 98 percent of car trips — at least in Los Angeles — start or end with free parking, according to the California Household Travel Survey. Parking also consumes a great deal of time: according to Donald Shoup, 16 studies from 1927 to 2001 found that, on average, 30 percent of the cars in congested downtown traffic were cruising to find parking. He also notes that more recently, in 2006 and 2007, 28 percent of the drivers stopped at traffic signals in Manhattan and 45 percent in Brooklyn were searching for curbside parking. The average American takes four car trips a day, and if you figure two are commuting based, that can still translate into a half hour or more of looking for a space. We seldom think of how much of our cities is given over to cars, but one study found that 14 percent of Los Angeles is devoted to parking.
Barclays Capital reported that we could see a 60 percent decline in the number of cars on the road, but the impact on parking could be much greater. Obviously, the emergence of driverless vehicles suggests a great deal about the future of cities, and the impact on parking may be considerable. First of all, it’s clear that on-demand car services like Uber and Lyft (along with car manufacturers like GM and Ford) have plans to provide driverless transportation to replace ‘drivered’ cars. That means that cars will not be parked after you get to the office, movie theater, or even grocery store. One study suggested that a single driverless car could replace up to 12 drivered cars. Instead of being parked at some destinations, the driverless car will simply move on to the next pickup. Another consideration is that driverless cars may be folded into municipal transport plans, like trains, buses, ferries, and bicycles, and not managed like taxis or on-demand car services at all. Even for those cars that are privately owned — which is likely to be a much smaller number considering how cheap Uberish services might be once the driver is out of the picture — driverless cars may be much more efficiently parked than human-managed ones, requiring dramatically less parking. (Source: Curbed Los Angeles.) The frontiers of the future will be the ruins of the unsustainable. One thing that is bound to change is the way that municipalities require a great deal of parking allocated for new housing, based on historic patterns. That is likely to change very quickly, and will immediately lead to denser and lower cost housing, even before our cityscapes are dramatically remade. There are other very significant economic impacts that will arise from driverless cars. It’s been estimated that accidents will fall 80 percent in the next 20 years as driverless cars sideline human drivers, who are demonstrably terrible at driving.
As a direct consequence, car insurance will plummet, with a drop from $125 billion in covered losses in the U.S. today down to as little as $50 billion in 2040. But this is hard to predict, since we have no prior data; it could be much, much lower. (Source: The University of Texas at Austin.) In the simulation above, we can get a sense of the driverless future. The white rectangles represent cars that have been ‘scheduled’ to pass through an intersection, while the yellow ones have yet to be scheduled. Once scheduled, the cars are coordinated in their passage, so that traffic signals are not necessary and the flow rate of the cars is much, much faster, without the need to stop. The economics of frictionless traffic — without traffic lights, traffic jams, and built-in delays — is another large factor in the net savings from driverless transport. Living and working in the city of 2025 will feel totally different, and not just because there is no driver turning the wheel. It will be a totally foreign landscape, with little parking, no congestion, and much more space at street level dedicated to people, and with significantly fewer cars in view at any time. Driverless won’t mean carless, but cars will no longer dominate our cities as they do today. And I can’t wait.

posted 20 days ago on gigaom
As I left the aspirationally named VR World Congress in Bristol, England (We just thought, “Let’s go crazy,” event founder Ben Trewhella told me of the 750-delegate event that started as a meetup), I found myself puzzling over a number of questions. Whether VR is going to explode as a technology platform, extending way beyond its gaming origins, was not among them. The number of potential use cases — enabling surgeons to conduct operations in the ‘presence’ of thousands of students, or architectural walkthroughs of new building designs — left me in no doubt. Equally, I have a firmer idea of timescales. While displays and platforms may have passed a threshold of acceptability, they are still evolving. The consensus was that we now have at least a year of lead time, during which hardware will improve, along three dimensions: latency, frame rate and pixel density, said Frank Vitz, Creative Director at Cryengine. In the meantime, software and content providers are discovering how to make the most of it all. But what new skills and capabilities need to be learned? The answer is not so straightforward, it transpires, as many of them (3D graphics, animation, behavioural design, data integration) are already available. Less straightforward is understanding how this palette of skills should be integrated. In mobile and web development for example, User Experience (UX) is a hot topic. Makes sense — the best apps are those which get things right by the user, offering potentially complex functionality and services in simple, accessible ways. Virtual Reality adds extra dimensions (quite literally) to the notion of experience. Not only is the environment immersive but it is also non-linear. Whereas most web sites and indeed, mobile apps tend to operate on a tree-walk basis (where you drop down a menu level then go ‘back’ to the main menu when done), VR removes this constraint. From a construction perspective, this changes the game. 
A mobile or web team might have a UX guy, an adjunct who can add a layer of gaily coloured iconography to an app, as UX is just one thing to get right. In VR however, the experience — VX if you will — is everything, and needs to sit at the centre of the project. As a consequence, many of my discussions at #VRWC were less about individual skills, and more about how to build the right skills mix into tight, multidisciplinary teams that can make the most of what VR has to offer. “You can’t just put out any old content and hope it will do well,” said Ben Trewhella. “Unless you are delivering an enhanced service, then what is the point?” concurred Rick Chapman, high tech sector specialist at Invest Bristol & Bath, who used the evolution of 3D techniques in film as an illustration. “The first 3D films used 3D as a gimmick. Avatar, whatever you think of its plot, was conceived and filmed for 3D.” Delivering VR-first experiences is a real, and potentially new, skill. The idea that VR is about storytelling came up repeatedly: it appears that holding someone’s attention in an immersive environment is tantamount to telling a good story, and anecdotal evidence suggested that those working at the leading edge of VR are also the better storytellers. This takes the conversation beyond base skills to how they should be harnessed. “Yes, you need the right mix of capabilities, but you also need empathy, you need rapport, you need to understand charisma,” said Rick. “Consider — language is a capability, but with charisma and rapport you don’t need to be so reliant on verbal acuity.” This is not simply a message for design agencies, gaming companies and animation studios. If VR is to become mainstream, larger companies keen to engage better with their customers, from retailers to manufacturers, need also to welcome VR into the core of their customer engagement strategies.
This means considering the impacts on the relationship between IT, marketing, sales and service and indeed, HR and recruitment. Getting the virtual experience right may become as much a symptom, of an organisation’s depth of understanding of its audiences and how they want to engage, as a cause of any resulting business value.

posted 22 days ago on gigaom
2016 is the year many thought leaders in the tech space are urging caution, expecting markets to cool drastically and urging startups to stay afloat by minimizing burn rate. Yet at the same time, the hardware industry is the fastest growing sector in the market, with investment up 30x since 2010. At this important juncture, what does the future hold for hardware companies? To better understand where the hardware industry’s opportunities are, what are perceived as the greatest challenges, and what it means to be a hardware founder today, we surveyed over 200 hardware companies and uncovered a lot of interesting information. Here are the highlights.

Hardware Companies are Working to Build Products Faster

In our report, we found that on average most companies budget one to three months to build a functional prototype. Similarly, the majority of companies budget just three to six months to go from functional prototype to production. If you’re not familiar with hardware development lifecycles, just know that this kind of schedule is incredibly fast (and ambitious) compared with what was possible just five years ago. Hardware startups are increasingly seeking to become leaner in order to get to market faster and maximize capital investment. But while companies are working hard to be lean and build faster, the outcomes don’t always match expectations. Data shows that about four out of five VC-backed crowdfunding projects were late in 2014, and of the late projects (a total cohort of 91 companies), 30 percent still hadn’t shipped in Q1 2015. Hardware companies are setting ambitious schedules to get to market faster, and that’s fantastic and important, but there are clearly still obstacles in the way preventing companies from building as fast as they’d like to. What are these obstacles and how can we overcome them? Well, there are many, and I won’t mention them all in this post, but one of the major ones we’re focusing on at Fictiv is prototyping speed.
Iterating on a physical product is inherently slower than iterating on a digital product, but if we can help companies to iterate daily vs weekly, that’s a huge step forward.

Hardware Companies Seek Access to Better Tools

One of the key factors that has contributed to massive growth in the hardware sector is an increase in the number of tools available to hardware companies for prototyping and development. We asked companies which tools they leverage in the development of their products and saw that 91% of companies use 3D printing, 58% use breadboards, 51% use Arduino, and much more. (Honorable mention goes out to the tried-and-true duct tape, used by 46% of survey takers!) On the design side of things, there are a large variety of CAD programs available, but according to our results, Solidworks still reigns supreme, used by 70% of our survey takers. While there’s been a big uptick in the number of tools available, we need to continue to teach a wider audience how to use these tools most effectively. Arduino and Adafruit, for example, are doing a fantastic job educating people on the electronics side, Dragon Innovation is teaching young companies how to work with manufacturers in China, and on our blog we’re educating engineers and designers on how to prototype on the mechanical side of things. However, access to tools is not enough to make a successful hardware company—we need to document and codify the knowledge around how to best use these tools and manufacture products at scale.

Raising Capital is Top of Mind

We polled companies on the greatest challenge in bringing a successful product to market, and 28% said funding & resources was #1. And they’re not alone—this feeling is being echoed by thought leaders across the venture capital space.
For example, Mark Suster, partner at Upfront Ventures, cautions: “I suspect 2016 will be the year that the more heated private tech markets cool.” Similarly, Fred Wilson, co-founder of Union Square Ventures, recently projected that “Markdown Mania will hit the venture capital sector as VC firms follow Fidelity’s lead and start aggressively taking down the valuations in their portfolios.” In response to VCs urging caution this year, minimizing burn rate and staying lean is the mantra for hardware startups in 2016. The good news is that hardware is still the fastest growing sector in the industry and investment has been increasing at astounding rates: investment in hardware is up 30x since 2010, and venture capital dollars in IoT have gone from $1.8 billion in 2013 to $2.9 billion in 2014 and $3.44 billion in 2015. To stay lean, hardware companies should consider burn rate and optimize for speed in the prototyping stage of development. Often we see cost-conscious startups skimp on up-front costs rather than considering the cost of wasted time, which ultimately comes down to burn rate (people are your biggest expense). So every time you order a 3D printed part, for example, the true cost of that part is really (part cost + (lead time x daily burn rate)).

Main Takeaways

The evidence from our State of Hardware Report points toward incredible potential for the hardware industry. More and more companies are building innovative products, we have better tools and technologies for prototyping, and the community is strong and passionate about open-source knowledge. But we still have a ways to go before hardware development can truly be accessible to everyone. We hope this snapshot of information points the community in the right direction to understand how to make hardware universally accessible so we can continue to build better tools and resources for truly democratized hardware development.
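As a footnote to the burn-rate point above, the true-cost rule of thumb is easy to operationalize. The sketch below uses invented numbers purely for illustration:

```python
def true_part_cost(part_cost, lead_time_days, daily_burn):
    """True cost of a prototype part, per the rule of thumb in the text:

        true cost = part cost + (lead time x daily burn rate)

    The burn you pay while waiting usually dwarfs the sticker price.
    """
    return part_cost + lead_time_days * daily_burn

# Hypothetical numbers: a $50 3D-printed part with a 5-day lead time at a
# $2,000/day burn rate really costs the company $10,050.
print(true_part_cost(50, 5, 2000))  # → 10050
```

Run against two quotes, the formula often shows the pricier fast vendor beating the cheaper slow one.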

posted 23 days ago on gigaom
Your average security operations center is a very busy place. Analysts sit in rows, staring intently at computer monitors. Cybersecurity alerts tick past onscreen—an average of 10,000 each day. Somehow, the analysts must decide, in seconds, which of these are false alarms, and which might be the next Target hack. Which should be ignored, and which should send them running to the phone to wake up the CIO in the middle of the night. It’s a difficult job. The alerts are false alarms the vast majority of the time. Cybersecurity tools have been notoriously bad at separating the signal from the noise. That’s no surprise, since the malware used by hackers is constantly mutating and evolving, just like a living thing. The static signatures that antivirus software uses to detect them are outdated almost as soon as they are released. The problem is that this knowledge can cause a kind of numbness—and make tech teams slow to act when cybersecurity software does uncover a real threat (a problem that may have contributed to the Target debacle). Luckily, a few government labs are experimenting with a new approach—one that starts with taking the “living” nature of malware a little more seriously. Meet the new generation of biology-inspired cybersecurity.

Sequencing Malware DNA

The big problem with signature-based threat detection is that even tiny mutations in malware can fool it. Hackers can repackage the same code again and again with only a few small tweaks to change its signature. The process can even be automated. This makes hacking computers cheap, fast, and easy—much more so than defending them. Margaret Lospinuso, a researcher at Johns Hopkins University’s Applied Physics Laboratory (JHUAPL), was pondering this problem a few years ago when she had a brainstorm. A computer scientist with a lifelong interest in biology, she was aware that programs for matching DNA sequences often had to ignore small discrepancies like this, too.
What if she could create a kind of DNA for malware—and then train a computer to read it? DNA maps out plans for complex proteins using only four letters, but CodeDNA uses a much longer alphabet to represent computer code. Each chunk of code is assigned a “letter” depending on its function—for example, a letter A might represent code that opens a certain type of file, while a letter B might represent code that opens a server connection. Once a suspicious computer program is translated into this type of “DNA,” Lospinuso’s software can then compare it to the DNA of known malware to see if there are similarities. It’s a “lossy technique,” says Lospinuso—some of the detail gets scrubbed out in translation. However, that loss of detail makes it easier for CodeDNA to identify similarities between different samples of code, Lospinuso says. “Up close, a stealth bomber and a jumbo jet look pretty different. But in the distance, where details are indistinct, they both just look like planes.” The resulting technique drastically cuts down on the time analysts need to sort and categorize data. According to one commercial cybersecurity analyst, the similarities CodeDNA found in two minutes would have saved him two weeks of hard work. But the biggest advantage of CodeDNA is that it won’t be fooled by small tweaks to existing code. Instead of simply repackaging old malware, hackers have to build new versions from scratch if they want to escape detection. That makes hacking vastly more time-consuming, expensive, and difficult—exactly how it should be.

How to Build a Cyber-Protein

Lospinuso’s team built CodeDNA’s software from scratch, too; it’s different from standard DNA-matching software, even though both implement the same basic techniques. Not so with MLSTONES, a technology developed at Pacific Northwest National Laboratory (PNNL). MLSTONES is essentially a tricked-out version of pBLAST, a public-source software program for deciphering protein sequences.
Proteins are constructed from combinations of 20 amino acids, giving their “alphabet” more complexity than DNA’s 4-letter one. “That’s ideal for modeling computer code,” said project lead Elena Peterson. MLSTONES originally had nothing to do with cybersecurity. It started out as an attempt to speed up pBLAST itself using high-performance computing techniques. “Then we started to think: what if the thing we were analyzing wasn’t a protein, but something else?” Peterson said. The MLSTONES team got a bit of encouragement early on when their algorithm successfully categorized a previously unknown virus that standard anti-virus software couldn’t identify. “When we presented [it] to US-CERT, the United States Computer Emergency Readiness Team, they confirmed it was a previously unidentified variant of a Trojan. They even let us name it,” Peterson said. “That was the tipping point for us to continue our research.” Peterson says she is proud of how close MLSTONES remains to its bioinformatics roots. The final version of the program still uses the same database search algorithm that is at the heart of pBLAST, but strips out some of the chemistry and biology bias in the pBLAST software. “If the letter A means something in chemistry, it has to not mean that anymore,” Peterson says. This agnostic approach also makes MLSTONES extremely flexible, so it can be adapted to uses beyond just tracking malware. A version called LINEBACKER, for instance, applies similar techniques to identify abnormal patterns in network traffic, another key indicator of cyber threats.

A Solution to Mutant Malware

Cyberattacks are growing faster, cheaper, and more sophisticated. But all too often, the software that stops them isn’t. To secure our data and defend our networks, we need security solutions that adapt as fast as threats do, catching mutated malware that most current methods would miss. The biology-based approach of CodeDNA and MLSTONES isn’t just a step in the right direction here—it’s a huge leap.
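For the curious, the lossy chunk-to-letter idea described above can be approximated in a few lines of Python with the standard library’s sequence matcher. The function categories, letter alphabet, threshold and sample “programs” below are invented for illustration; CodeDNA’s real encoding and matching algorithms are not public:

```python
from difflib import SequenceMatcher

# Hypothetical coarse categories for chunks of a program, each mapped to
# one letter of a made-up "DNA" alphabet (unknown chunks become 'X').
CATEGORY_LETTERS = {
    "file_open": "A",
    "net_connect": "B",
    "registry_write": "C",
    "spawn_process": "D",
    "crypto": "E",
}

def encode(chunks):
    """Lossy translation: a list of chunk categories becomes a letter string."""
    return "".join(CATEGORY_LETTERS.get(c, "X") for c in chunks)

def similarity(sample, known):
    """Sequence similarity in [0, 1]; small repackaging tweaks barely move it."""
    return SequenceMatcher(None, encode(sample), encode(known)).ratio()

known_malware = ["file_open", "crypto", "net_connect", "spawn_process"]
# A "repackaged" variant: one extra chunk inserted, behavior otherwise intact.
repackaged = ["file_open", "crypto", "registry_write", "net_connect", "spawn_process"]

print(similarity(repackaged, known_malware))  # still a high score
```

Because the comparison works on coarse functional letters rather than raw bytes, the inserted chunk changes the byte-level signature completely but moves the sequence score only slightly, which is the whole point of the lossy encoding.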
And with luck, they will soon be available to protect the networks we all rely upon. (With contribution by Nathalie Lagerfeld of Hippo Reads.)

posted 23 days ago on gigaom
Businesses large and small have turned to VoIP, videoconferencing and many other IP-enabled communications platforms to enhance collaboration and speed the decision-making process. However, few consider the security implications of conducting meetings over internet-connected devices, and may be leaving themselves open to eavesdroppers at best, corporate espionage at worst. Those technologies, which include VoIP, videoconferencing, hosted webinars and IP-based communications platforms, have transformed the way businesses communicate, creating a paradigm shift that has resulted in the virtual meeting place / virtual conference room. Yet, for all of the productivity created, a deep dark secret lingers in the shadows – a secret that can be summed up simply as who can eavesdrop on those virtual meetings, or intercept the data shared. That secret culminates in a real-world threat, where the specter of corporate espionage, powered by IP-based communications, can result in lost revenue and failed projects. Simply put, securing all forms of communication should be a major concern for any business entity looking to share confidential data or discuss intellectual property across the unfettered packets flying across the internet. After all, businesses spend countless dollars on firewalls, security appliances and other InfoSec technologies to protect files and prevent unauthorized access to corporate systems, yet it seems little thought is put into securing technologies that have become all too common, such as videoconferencing and hosted IP-based conferencing platforms. To be effective, IP-based conferencing has to be easy to use, easy to access and flexible enough to be reconfigured on the fly. What’s more, conferencing must be able to work across several different devices, ranging from smartphones to desktop PCs to dedicated IP conference room appliances.
Simply put, if the platform makes things difficult for users, those users will attempt to go another route, such as an open or “free” system, further complicating the security picture. Therein lies the security conundrum of virtual meetings: how can IT professionals make them both easy to use and secure from data leakage? The answer lies in rethinking how users engage with their meeting platforms of choice. In other words, a conferencing system has to be both easy to use and easy to secure, two elements that are normally at polar opposites of the communications equation. To that end, videoconferencing vendor Pexip has launched Infinity, a hosted platform that combines ease of use with policy-based enforcement to create secure virtual meeting rooms. The product accomplishes that by leveraging an external policy server, which allows administrators to define policies that enforce security rules based upon multiple factors, such as user identity, location, device and so forth. Of course, establishing identity is only the first part of the security equation. Here, Pexip brings to the table some additional capabilities, such as assigning a temporary PIN to a particular meeting and then delivering that PIN via an RSA token, SMS, or other method so that two-factor authentication becomes the norm for any conference. For example, with SMS, each time the policy server receives a meeting request, a dynamic PIN is generated (and stored for 60 seconds); that PIN is then delivered to the meeting attendee using their assigned phone number, which the policy server can look up in the directory. The attendee uses that PIN as part of the authentication to enter the meeting. There is a lesson to be learned here: security ideologies must flow down to even the most basic of corporate communications.
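The PIN workflow described above is simple enough to sketch. The Python below is a hypothetical illustration of the pattern (a crypto-random PIN, a 60-second validity window, single use), not Pexip’s actual policy-server API:

```python
import secrets
import time

class MeetingPinServer:
    """A toy sketch of the short-lived meeting PIN scheme described above.

    Hypothetical and simplified: a PIN is generated per meeting request,
    kept for a limited window (60 seconds, per the text), delivered
    out-of-band (SMS, RSA token), and accepted at most once on join.
    """
    TTL_SECONDS = 60

    def __init__(self):
        self._pins = {}  # attendee -> (pin, issued_at)

    def request_pin(self, attendee, now=None):
        pin = "%06d" % secrets.randbelow(1_000_000)  # 6-digit, crypto-random
        self._pins[attendee] = (pin, time.time() if now is None else now)
        return pin  # in practice, sent to the attendee's directory phone number

    def join(self, attendee, pin, now=None):
        record = self._pins.pop(attendee, None)  # pop: each PIN is single use
        if record is None:
            return False
        issued_pin, issued_at = record
        now = time.time() if now is None else now
        return pin == issued_pin and (now - issued_at) <= self.TTL_SECONDS

server = MeetingPinServer()
pin = server.request_pin("alice@example.com", now=0.0)
print(server.join("alice@example.com", pin, now=45.0))  # → True (inside the window)
print(server.join("alice@example.com", pin, now=46.0))  # → False (single use)
```

The `now` parameter exists only so the expiry logic can be demonstrated deterministically; a real deployment would rely on the clock and on out-of-band delivery for the second factor.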

posted 24 days ago on gigaom
“We think mobile first,” stated Macy’s chief financial officer Karen Hoguet in a recent earnings call with financial analysts. A quick glance at the US department store chain’s 2015 financial results explains why mobile technologies might be occupying minds and getting top priority there. Sales made by shoppers over mobile devices were a definite bright spot in an otherwise disappointing year for the company. Mobile revenues more than doubled, in fact, thanks to big increases in the number of shoppers using smartphones and tablets not only to browse, but also to buy. So it’s no surprise that Macy’s hopes to maintain this trend by continuing to improve the mobile experience it offers. In the year ahead, Hoguet explained, this ‘mobile first’ mindset will see Macy’s add new filters to search capabilities, clean up interfaces and fast-track the purchase process for mobile audiences. Other consumer-focused organisations are thinking the same way, and the phrase ‘mobile first’ has become something of a mantra for many. One of its earliest high-profile mentions came way back in 2010, in a keynote given by Eric Schmidt, the then Google CEO (and now Alphabet executive chairman), at Mobile World Congress in Barcelona. “We understand that the new rule is ‘mobile first’,” he told attendees. “Mobile first in everything. Mobile first in terms of applications. Mobile first in terms of the way people use things.” The trouble is that, for in-house development teams, a mobile-first strategy still represents something of a diversion from standard practice. They’re more accustomed to developing ‘full size’ websites for PCs and laptops first, and then shrinking these down to fit the size, navigation and processing-power limitations posed by mobile devices. The risk here is that what they end up with looks like exactly what it is: a watered-down afterthought, packing a much weaker punch than its designed-for-desktop parent.
A development team that has adopted a mobile-first strategy, by contrast, will start by developing a site for mobile that looks good and works well on small form factors, and then ‘work their way up’ to larger devices, adding extra content and functions as they go. That approach will make more and more sense as more ‘smart’ devices come online and the desktop PC becomes an increasingly minor character in our day-to-day lives. Take wearables, for example: many CIOs believe that headsets, wrist-mounted devices and the like hold the key to providing workers with relevant, contextual information as and when they need it, whether they’re up a ladder in a warehouse or driving a delivery van. Developing apps for these types of devices presents many of the same challenges associated with smartphones and tablets: minimal screen real estate, limited processing power and the need to integrate with third-party plug-ins and back-end corporate systems. Then there’s the lack of a standardised platform for wearables to consider, meaning that developers may be required to adapt their mobile app to run on numerous different devices. For many, it may be better to get that hard work out of the way at the very start of a project. In a recent survey of over 1,000 mobile developers conducted by InMobi, only 6% of respondents said they had created apps for wearables, but 32% believe they’re likely to do so in future. The same rules apply to a broader category of meters and gadgets that make up the Internet of Things, from meters for measuring gas flow in a utilities network, to products for ‘smart homes’, such as the Canary home-monitoring device, to virtual reality headsets, such as Samsung’s Gear VR, as worn by attendees at Facebook CEO Mark Zuckerberg’s keynote at this year’s MWC.
As the population of ‘alternative’ computing devices grows, developers will begin with a lean, mean mobile app, which functions well despite the constraints of the platform on which it runs, having made all the tough decisions about content and function upfront. Then, having exercised some discipline and restraint, they’ll get all the fun of building on top of it, to create a richer experience for desktop devices. More importantly, they’ll be building for the devices that consumers more regularly turn to when they want to be informed, entertained or make a purchase. In the US, digital media time (or in other words, Internet usage) on mobile (51%) is now significantly higher than on desktop (42%), according to last year’s Global Internet Trends Report by Mary Meeker of Silicon Valley-based venture capital firm Kleiner Perkins Caufield & Byers (KPCB). In other words, developers should go mobile first, because that’s what we consumers increasingly do. Picture Credit: Farzad Nazifi

posted 29 days ago on gigaom
This article is the second in a series of six. It is excerpted from Not Everyone Gets a Trophy: How to Manage the Millennials by Bruce Tulgan. A senior private equity managing director told me of a parent calling to complain that her son was working too many hours. I asked how he reacted to this call. “I just listened and tried to be polite. I didn’t tell her that her son was going to make ten thousand dollars less for every minute she kept me on the phone. But I did the math in my head.” He went on, “This is ridiculous. For one thing, my parents never in a million years would have considered calling my boss when I was in my first job out of college. I can’t even imagine that. They didn’t even know my boss’s name. And I would have been mortified if my boss got a call from my parents.” It’s become almost cliché to say that the Millennial generation is over-parented. But they are. And that is a fact with which managers today must grapple. “This is an outrage,” some managers say, “I shouldn’t have to deal with their parents at all.” On the flip side, some managers simply accept that their young employees will be accompanied and assisted by their parents throughout the early stages of their working lives. I don’t think you should accept that. You hired the employee, not the parents. But you do have to deal with it. One nurse manager on a very busy hospital floor told me, “My approach is simple: sink-or-swim time now, kids. Just let the real world sort them out.” The problem is that if you take a sink-or-swim approach with Millennial employees, they are likely to sink; or go to the shallow end and play; or swim off in their own direction; or get out of the pool, walk across the street, and go work for your competition. And when you hire a replacement, that person is likely to bring his or her parents along too. The irony is that if you hire a Millennial who is not close to his or her parents, you may be sorry. 
Among today’s young workers, those who are closest to their parents will probably turn out to be the most able, most achievement oriented, and the hardest working. In my seminars, I tell managers that the way to deal with the over-parenting problem is to take a strong hand as a manager, not a weak one. Your Millennial employees need to know that you know who they are and care about their success. You need to make it a priority to spend time with them. Guide them through this very difficult and scary world. Break things down for them like a teacher. Provide regular, gentle course corrections to keep them on track. Be honest with them so you can help them improve. Keep close track of their successes no matter how small. Reward the behavior you want and need to see, and even negotiate special rewards for above-and-beyond performance in very small increments along the way. When I describe this approach at seminars, at least one manager will remark, “This sounds a lot like parenting. Are you saying that we should manage these young upstarts as if we are their parents?” I’m afraid the answer I’ve come to is yes, at least sort of. Let’s put it this way. You can’t fight the over-parenting phenomenon, so run with it. Your Millennial employees want it. They need it. Without strong management in the workplace, there is a void where their parents have always been. Step into the void. Take over the tutoring aspects of the parental role in the workplace without taking over the emotional part (at least mostly). Do be careful, and don’t get carried away. The worst thing you can possibly do with Millennials is treat them like children, talk down to them, or make them feel disrespected. Millennials are used to being treated as valued members of the family, whose thoughts and feelings are important. Remember, Millennials have gotten more respect from their parents and elders than any other generation in history. 
I call this approach ‘in loco parentis management.’ In loco parentis, a Latin term that means “in the place of a parent,” typically is used to refer to the position of an institution (usually a school) charged with the care of a minor in the absence of the minor’s parent. Here’s what this means: Care about your young employees. Don’t pretend to be their best friend. Give them boundaries and structure. Help them keep score. Negotiate special rewards in very small increments. About the Author Bruce Tulgan is an adviser to business leaders all over the world and a sought-after keynote speaker and seminar leader. He is the founder and CEO of RainmakerThinking, Inc., a management research and training firm, as well as RainmakerThinking.Training, an online training company. Bruce is the best-selling author of numerous books including Not Everyone Gets a Trophy (Revised & Updated, 2016), Bridging the Soft Skills Gap (2015), The 27 Challenges Managers Face (2014), and It’s Okay to be the Boss (2007). He has written for the New York Times, the Harvard Business Review, HR Magazine, Training Magazine, and the Huffington Post. Bruce can be reached by e-mail at [email protected]; you can follow him on Twitter @BruceTulgan, or visit his website.

posted about 1 month ago on gigaom
Bob Lord, the former AOL president whose ascent to AOL CEO was derailed by the sale to Verizon, is returning to his Razorfish digital roots and taking the reins of digital at IBM. In an indication of the times we are in, the news was first broken on Twitter, by IBM’s David Kenny. Arik Hesseldahl reports: In a statement circulated to IBM employees, the company said Lord will “accelerate and scale all aspects of IBM’s digital presence, operations and ecosystem,” and will run its digital platform, digital sales and marketing, and its developer ecosystem. Chief Digital Officers are often called in to create a digital transformation, not just to run digital operations. I guess it’s clear that IBM needs such a transformation, in order to get more agile in just about the most rapidly shifting market in the world: enterprise technologies. I’m going to have to try to connect with Lord, and see what his charter actually is. originally posted on [email protected]

posted about 1 month ago on gigaom
I used to think bootstrapping was unsustainable; then a client in the cloud changed my mind. I used to treat bootstrapping as a joke, likening startups that “bootstrap growth” to new restaurants that keep talking about their first Zagat listing while still having only plastic displays of food in the kitchen. Then I took on sales-and-marketing startup Agile CRM as a content strategy client. Being a part of the cloud-based app’s core team since before Agile CRM’s public beta launch has completely changed my opinion on bootstrapping, and not just because we passed the sacred seven-figure revenue mark last year. I used to see investors and VC firms as market soothsayers, as if they somehow understood business better than the businesses themselves. Now I know that bootstrapping builds better SaaS apps. Here’s why. Pitching Customers Instead of Investors With a bootstrapped SaaS app, you pitch your customers instead of investors. And guess what? You pitch them every day. That daily engagement and interchange enable bolder innovation, from the app to the marketplace and back again. When I first started working with the Agile team, we were our own best customer, if only because we were the only customer. The core development team had designed the first version of the app to solve immediate, real-world problems faced by their first SaaS app, ClickDesk, when it started scaling at an unprecedented rate. They couldn’t find a single affordable, extensible, integrated option for sales and marketing automation to track lead behavior and engage contacts throughout the entire customer lifecycle. Fed up with overpriced software that seemed to be aimed only at enterprise users with unlimited budgets, they decided to build a solution themselves and voila, Agile was born. It was just an internal solution at first, but that would soon change.
In my opinion, the most innovative part of founder Manohar Chapalamadugu’s million-dollar vision in leading Agile CRM (and this was there from the very beginning) has been his emphasis on building an all-in-one solution focused on sales and marketing processes, rather than just standalone features. It’s a bold vision, encompassing everything from call automation and online scheduling to sales gamification and automated email nurturing. With the decision to bootstrap growth from the beginning, input about these processes has continued to come directly from customers. Something special happens when you pitch customers daily and listen closely to their responses. Agile now has almost 1,300 ideas posted by customers on UserVoice, with over 200 of those ideas completed or in progress. If the CRM had taken outside investment, I think it’s unlikely that features such as the in-app landing page builder with integrated lead magnets would have been able to evolve naturally on top of an already extensive feature set. There would have been too many constraints. An Ongoing Conversation Technically skilled support staff are one of the core reasons that bootstrapped SaaS companies create better products in the long run. Many of Agile’s customer testimonials speak of real campaigns and successes, and those quotes come not from anonymous review websites but from actual conversations with team members in India. A brief word of advice. Whether you call them customer success agents, sales support staff, customer happiness rockstars, or a new title we haven’t even heard yet, let me emphasize two things: 1) They need to understand customer wants and needs (i.e., it’s better to have one technically skilled success agent than three with limited knowledge or experience of the actual product and industry); and 2) They should be some of your first hires. Just because you’re bootstrapping, that doesn’t mean you should skimp on support. In fact, the opposite is true.
I’ve been continually impressed by the dedication and responsiveness of Agile’s sales and support staff to customer wants and needs, and as I’ve learned more about bootstrapped companies with exceedingly high customer satisfaction ratings, I’ve noticed that this dedication to customer success goes hand-in-hand with smart bootstrapping. Aha!, the (totally bootstrapped) visual roadmapping app for product managers, stands out in particular with their decision to forego salespeople in favor of customer success. Failure is the Ultimate Motivation Bootstrapping a SaaS app isn’t about the choice between having weekly investor calls (or shareholder meetings) or weekly calls with your early adopters. It’s bigger than that. As Ryan Shank of (totally bootstrapped) mHelpDesk has written, bootstrapping is about building “an empire…one customer at a time.” We’ve already discussed the importance of customers. Now let’s shift focus to that idea of building “an empire.” The problem with empires is that eventually most of them fail. Once a SaaS company decides to bootstrap their own growth, there’s a shift in perception regarding their own product. Maybe this is true for other types of apps, too, but with software-as-a-service I’ve noticed that the shift is much more dramatic, maybe because the rate of micro-focused iteration is so high, as are the possibilities for large-scale changes, both on the front and back ends of your product. It’s terrifying, but it’s also exhilarating. Updates happen automatically and customers are instantly engaged with them. Bootstrapped SaaS apps can create more dynamic products because (if they’re successful) they embrace the possibility of failure, using it as motivation for streamlining their product and constantly making small enhancements, too, such as cleaning up front-end code and improving speed in as many ways as possible, like for one particular feature in one particular mobile browser. 
As Shank notes, having a smaller amount of cash on hand also demands a certain discipline and focus. How will you use that money? Will you build an app customers love, or will you create another pitch deck for investors?

posted about 1 month ago on gigaom
There’s a soul-crushing moment in The Iron Giant when [spoiler alert] the alien robot chooses to save humans from an atomic bomb. The 1968 story by British Poet Laureate Ted Hughes (published as The Iron Man) presents one of the archetypal intersections of technology and humanity. It was science fiction then but, in the nearly fifty years that have since passed, the delta between tech and humans has narrowed. Today we live in a world where technology is closing in on what may be our most complex asset: emotion. And while the long-term impact unleashes a provocative range of possibilities, the more immediate effects are starting to be seen in customer engagement. Emotion isn’t a new frontier in business, of course; sentiment analysis and emotional branding were in practice long before they were formalized. Focus groups date at least as far back as World War II, and Mad Men fans will likely recall Draper’s tryst with consumer research (and consultant Faye Miller…). And, of course, as the 20th century progressed, technology joined customer insight’s analog tool sets. But it’s only more recently that tech-powered emotional analytics have really stepped into the spotlight. What’s driving tech’s emo pursuits? There is a certain inevitability to it all; for years now, artificial intelligence has made its way into countless sci-fi narratives, laying out a trajectory of sorts for innovation. But a more practical driving factor is the business case: the seismic shift in consumer behavior (thanks largely to on-demand content and mobile devices) has challenged brands by turning neatly defined channels and dayparts into an always-on free-for-all. And while the ability to reach consumers anywhere/anytime sounds compelling on the surface, try accessing a highway without a designated on-ramp. Without a construct for when to engage consumers, you need to be a lot smarter about making your move. How is it done?
Fortunately, the same innovations that have muddied the waters of engagement offer a path to clarity. Mobile devices—the digital appendages that we eat, sleep and everything with—offer data on consumer needs, wants and, increasingly, emotions, all of which can be leveraged for targeting and overall strategy. Microsoft’s work on Project Oxford, along with Apple’s recent acquisition of Emotient, has triggered recent buzz around facial recognition technology, while MediaCom’s announcement that it would use emotional tracking via facial detection in planning suggests that this is more than buzz. There are also a growing number of offerings that use biometric feedback (like body temperature, sweat and heart rate) to gauge emotional response. Innerscope Research and Lightwave offer such technology; the latter recently partnered with 20th Century Fox to measure emotional reactions to The Revenant. Biometric data was also (not surprisingly) a topic of discussion at SXSW, with companies like Under Armour, Microsoft and Samsung coming together to discuss how it can be used to make marketing smarter. It’s an insight upgrade for marketers The data available through these various technologies isn’t simply a substitute for the traditional practice of marketing-by-assumption—it transcends it by providing marketers with a much more granular and actionable set of insights with which to make marketing decisions. Further, what’s different today than, say, 20 years ago is that the same sort of technology that powers martech platforms is also available in consumer products. Smartwatches, fitness wristbands and even sensor-equipped bras are opening the door to a more emotionally aware exchange between consumers and brands. As the technology proliferates, marketers have the opportunity to gain real-time insight into consumers’ emotional states at scale. (Focus groups start looking quaint, don’t they?) Is it too creepy?
The response to this question often hinges on the value exchange—in other words, consumers are more likely to share information when they get something of value out of it. In the case of emotion tracking, the marketer’s endgame is to deliver more relevant and effective engagement opportunities by presenting messages or experiences that fit a specific moment and emotional mindset. As advertising becomes more attuned and responsive to consumer needs—offering information and utility instead of just taglines—it may evolve into something that feels more like a service than an imposition. (This can already be seen in mobile ads that offer features like store finders.) If it’s handled correctly on the brand, publisher and technology side (that is, with transparency), then the value exchange can work in the consumer’s favor. Of course, there are benefits beyond marketing, too. Samsung’s Look at Me app is designed to help autistic children improve communication skills by, in part, helping them decode emotions, while the recent Be Fearless campaign showcases the ability to use virtual reality technology to treat phobias. Similarly, Stanford’s Virtual Human Interaction Lab is exploring virtual reality’s potential for building empathy. A number of similar programs are in use by the likes of the UN and Amnesty International. In other words, there are a lot of feel-good things happening in the world of emo-tech. What next? What does the future look like for a world where technology is attuned to our emotions? From a customer engagement perspective, emotional analytics enables clearer insight into consumer needs and general receptivity. Giving marketers the ability to make smarter decisions about how and when they engage consumers can have a positive impact on the relationship between brands and consumers, which is timely given the tension around ad blocking and oversaturation.
It’s also becoming easy to imagine a world where the information exchanged via, for example, wearables makes it possible to skip that intermediary message and allow brands to instantly respond to a current emotional need. Imagine getting a serotonin boost from your earbuds while riding the hyperloop to a lunch meeting in San Francisco. (This opens up a longer tangent about opting in, which we can save for another conversation.) Despite great strides in emotional analytics, marketers are still in the discovery phase—learning what data is available and figuring out how to most effectively interpret and act on that data. So there’s still a way to go before we reach many of these possibilities. And, along the way, a fair amount of trust needs to be in place for emotional analytics to be effective. That’s trust in the brand, of course, but also in the technology. Consumers need to see that tech can interpret complex emotions with reasonable accuracy and then, based on that insight, take the most favorable action—whether it be delivering the most relevant brand message or, well, saving us from an atomic bomb. You know, the simple stuff.

posted about 1 month ago on gigaom
I’ve been evaluating a long list of work management tools as part of the research for the Work Management Narrative report (see my recent post, Work Management in Theory: Context). One issue that comes up a great deal is integration with email, which is a common trigger for a user to create a task, as well as a means to communicate with other team members who may not be using the same — or any — work management tools. This post doesn’t look into how work management tools use email as a way to communicate with team members not using the work management tool: that’s a separate use case. I’m focusing on email as a parallel sort of communication, and one from which a great many tasks arise. There are a number of approaches to email integration, which I will categorize like this: Low or no integration: despite the ubiquity of email, and the obvious need to communicate with the wide, wide world through it (and email’s insatiable hunger to communicate with us, too), some vendors offer little or no support for the realities of email. Not good. Loose integration: some vendors have opted for a loose integration, often through bookmarklets or third-party connection services like Zapier and IFTTT. For example, Azendoo supports a Zapier ‘zap’ where Gmail messages that I star become tasks in a specific project. Subsequently, the user can open Azendoo, and perhaps move the task to another project, add notes, fool with metadata (due dates, assignment, etc.). A bookmarklet — like Wrike‘s — accomplishes more or less the same thing. In either case, the connection is one-way, and the work management tool does not try to ‘handle’ email in a general way: the precipitating email is just a starting point for a task. At present, I think loose integration is the best approach.
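To make the one-way nature of loose integration concrete, here is a minimal sketch of the starred-email-becomes-task pattern. The types are hypothetical simplifications; the real Gmail, Zapier and Azendoo data models are richer.

```python
from dataclasses import dataclass

# Hypothetical shapes for illustration only.
@dataclass
class Email:
    subject: str
    sender: str
    starred: bool = False

@dataclass
class Task:
    title: str
    project: str
    notes: str = ""

def starred_emails_to_tasks(inbox: list[Email], project: str) -> list[Task]:
    """One-way loose integration: each starred email becomes a task in a
    designated project. The email is just the starting point; due dates,
    assignment and other metadata get added later, inside the work
    management tool."""
    return [
        Task(title=e.subject, project=project, notes=f"From: {e.sender}")
        for e in inbox
        if e.starred
    ]
```

Note that nothing flows back: unstarring the email, or editing the task, leaves the other side untouched, which is exactly the limitation the in-inbox and in-app approaches below try to address.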
In-inbox integration: Some solutions — like Todoist (a team task management tool) and Sortd (ditto) — provide a Google Chrome extension so that when you are ‘in’ Gmail you can easily convert an email to a task (and add metadata, etc.) in a window, without ever leaving the Gmail context. This is a lot smoother than loose integration, especially for people who communicate through email a great deal. Also, clicking on a link back to an email makes it more of a two-way solution. In-app email: Some tools aspire to replace the email client’s functionality altogether, basically pulling in all emails and implementing services that emulate — at least in part — the capabilities of email clients. It is this last case that I want to zoom into in this post. I’ve tried at least two solutions in recent weeks that seek to bring email integration in-app: Fleep and ScribblePost. I had an exchange with the CEO of ScribblePost, Alon Novy, about his company’s model of email integration. One outcome was the following post, shared with him through the company’s support system. In that post I suggested a more sophisticated version of in-app email integration: Alon – I tried and rejected your competitor Fleep’s attempt to act as an email client. The hybrid failed because of some of the same issues I have with your approach: I might have a number of other plugins or features that operate in the Gmail client that I can’t walk away from, like Google Tabs. If I have to undertake email hygiene in both Gmail and in the work management tool, that is an impossible cost. The design of an email client is distinct from that of a work management tool, and intended to meet a wide range of use cases, not just those related to work management. My bet is that the best approach will be a close coupling, but not a full integration of email in the work management tool, like your SP [ScribblePost].
On the work management side, some emails — those that are starred, or labeled in a specific way — would have a handle created, so that the email can be indirectly referenced and annotated: for example, comments can be added to the handle, or a task can be created as a follow-up to the email and linked to the email handle. I think that the email handle is a distinct type of object in the work management space, different from tasks, internal messages, and posts. An email handle is a specific example of a general notion: a handle to reference some info object principally or partially managed outside the work management solution. That could also hold for Twitter or Facebook messages, for example, or Salesforce contacts. At any rate, SP could implement a set of actions for email handles that fall into two groups: those that represent actions on the handle — like creating or deleting the handle, linking it to a task (as a special sort of attachment), sharing it, adding comments, moving a handle from one project to another, etc. — as opposed to actions on the email linked to the handle — like reply, forward, archive, and so on. I think such a two-sided approach covers the greatest number of use cases, including unforeseen ones. You might also benefit from a Chrome plugin for Gmail, so that some (or perhaps even all) actions that users might want to perform vis-à-vis the intersection of email and SP could happen ‘in’ Gmail. For example, I might read an email and decide to start tracking this thread in SP, associate one or more tags with the handle, and assign a follow-up task to myself referencing the email along with some notes. I could then get back to other email, some of which never crosses over into SP. Note that the info handle concept lines up fairly directly with a platform play, obviously.
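To make the handle proposal concrete, here is a minimal sketch of the two groups of actions described above. Every name here is hypothetical; neither ScribblePost nor Fleep exposes an API like this.

```python
from dataclasses import dataclass, field

@dataclass
class EmailHandle:
    """A lightweight reference, inside the work management tool, to an
    email that remains managed by the external mail system."""
    message_id: str          # reference into the external mail system
    project: str
    comments: list = field(default_factory=list)
    linked_tasks: list = field(default_factory=list)

    # Group 1: actions on the handle itself, inside the work tool.
    def add_comment(self, text: str) -> None:
        self.comments.append(text)

    def link_task(self, task_title: str) -> None:
        self.linked_tasks.append(task_title)

    def move_to(self, project: str) -> None:
        self.project = project

    # Group 2: actions delegated to the email linked to the handle.
    # A real implementation would call the mail provider's API here;
    # this sketch just returns the request it would make.
    def reply(self, body: str) -> dict:
        return {"action": "reply", "message_id": self.message_id, "body": body}

    def archive(self) -> dict:
        return {"action": "archive", "message_id": self.message_id}
```

The same split would apply to handles for tweets or Salesforce contacts: group 1 stays entirely inside the work management tool, while group 2 always round-trips to the external system.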
I applaud Alon and his team for the innovative ideas they are developing in ScribblePost, and likewise the brilliant design of Fleep, both products which I will be reviewing in the upcoming Narrative. I’m sharing this to stimulate discussion around these ideas, and also (shameless plug) to demonstrate the sort of thinking that animates the report.

posted about 1 month ago on gigaom
This is an excerpt of the upcoming report, Work Management Narrative, in which I will be reviewing around a dozen products, including Asana, Azendoo, Basecamp, Clarizen, Fleep, Flow, LiquidPlanner, Mavenlink, Smartsheet, Trello, Workfront, Wrike, Zoho Projects and others. Work Management in Theory: Context Work management is a term that has emerged in recent years as task management tools were enhanced with various social communication capabilities, principally derived from design motifs of work media tools. This increase in capabilities — and the resulting overlap of work management capabilities with those of work media tools — means that trying to assess the trends prevalent in work management really requires stepping back. Today, there is a wide range of approaches to supporting cooperative work in the workplace, and they have many features in common. So, in many instances, groups or companies evaluating tools for team cooperation may consider offerings that are very different in their underlying design, and that require correspondingly different approaches to their use. The Lay of the Landscape Here’s a table that attempts to make sense of a variety of technologies that are used in business to support cooperative work. It is not exhaustive, but I hope it will clarify some of the distinctions between these classes of tools. At the same time, there is a great deal of overlap, so some degree of confusion is inevitable. The primary distinction here is the degree of emphasis on task-centric versus message-centric tools.
Those that we will focus on in this report are task-centric, even though they have to include some fundamental level of social communication to be considered work management tools. So, for example, Todoist is a leading team task management tool, widely used in business. However, the tool lacks social communication aside from comments (‘notes’) associated with tasks: Todoist does not support messaging, discussions, activity streams, or ‘call outs’ (also called @mentions). While tasks can be assigned to others by the task creator, there is no other way that users can reference each other, or ‘talk’. And at the least social level of task management, personal task management tools don’t allow even the most basic level of business-oriented task assignment. As a result, team task management tools are not covered in this report, although Gigaom may develop a report like this one for that market at some time in the future. Work management tools share a lot of similarities with various message-centric work technologies. Note that I have divided the message-centric tools into two sorts: Follow centric — like Yammer, where the primary orientation of messaging is around following message sources, and messages are primarily displayed in activity streams based on the user choosing who and what to follow. Chat centric — such as Slack, where the primary orientation of messaging is around chat rooms, or channels, and messages are principally displayed in those contexts when the user chooses to ‘join’ or ‘enter’ them. Some work media tools provide a degree of task management, although it may not be the primary focus of the tool. And, as a general case, products like Jive, Yammer, and IBM Connections have little or no native task management, relying instead on integration with third-party solutions.
Likewise, many leading work chat offerings, like Slack and Hipchat, don’t have native task management, also relying instead on integration with task management tools, like Asana and Jira. Lastly, the class of tools I refer to as workforce communications (like Lua, Avaamo, Fieldwire, and Sitrion One) have characteristics like those of work media and work chat tools, but are principally oriented toward communications management for an increasingly mobile contingent of the out-of-office ‘hard’ workforce, such as construction, retail and restaurant workers, field sales, security, and others.

At the bottom tier of the table in figure 1 are tools that are not principally oriented toward business use, like personal task management (Todoist, and Google Tasks), social media (Facebook, and Twitter), and consumer chat apps (Facebook M, and WhatsApp). These are widely used in business contexts, although they aren’t geared for it. Note, however, that this doesn’t mean they couldn’t be recast as team- or work-oriented tools, like the trajectory of Facebook for Work. There are other, less closely related work technologies that are also not investigated here, like curation tools, conferencing tools, and so-called ‘productivity’ tools (like Microsoft Office 365, Dropbox Paper, and Google Docs/Sheets/Slides). These, again, are candidates for inclusion in another report.

Next week, I will be posting another excerpt from the report.

posted about 1 month ago on gigaom
Gigaom’s CEO Byron Reese is launching a new conference, Gigaom Change, which is ‘a hands-on summit designed to help those leading business get a grasp on the dizzying amount of technical change occurring around us and gain the confidence to accelerate enterprise adoption’. I thought I’d ask Byron about the conference, and what his goals are.

About Byron

Byron Reese is an Austin-based serial entrepreneur, inventor, futurist, and historian who believes that technology will soon usher in a new golden age of humanity, in which there will be no hunger, disease, poverty or war. He launched his first business while an undergraduate at Rice University and has started several others over the years. Along the way, he became an award-winning author with the publication of Infinite Progress. He presently serves as the CEO of Knowingly Corporation, as well as the publisher of Gigaom.

The Interview

Stowe Boyd: Gigaom Change is a new event you’re orchestrating. What was your motivation? Was there an itch you just couldn’t scratch? Does the world need another tech conference?

Byron Reese: Gigaom Change is something entirely new and different. There isn’t another conference quite like it. It is based on the idea that a series of technologies are all converging on us at the same time, and they promise to have a cataclysmic impact on the world. The seven technologies we will be looking at are artificial intelligence, virtual reality/augmented reality, robotics, human-machine interfaces, nanotechnology, cybersecurity, and 3D printing. I have noticed that business leaders everywhere are having trouble keeping up with these technologies. Everyone knows the high-level basic concepts, but this growing complexity is strangling corporate foresight and slowing business productivity. If you think about it, it is no surprise that these technologies are so overwhelming.
Humanity has faced real change only three times in the past: when we got speech 100,000 years ago, when we developed agriculture 10,000 years ago, and when we invented writing 5,000 years ago. We are literally going to witness the fourth major change, when our technologies upend our society and its institutions. Business leaders need to understand how to use these new technologies.

SB: The event will be covering a broad range of topics, so how will that be done? Seven parallel tracks? What’s the experience going to be like?

BR: Gigaom Change is a single-track event broken up into seven sections over two days. So all the attendees have the same conference experience.

SB: How does Gigaom Change articulate with Gigaom’s research focus?

BR: Gigaom’s research focus is exactly that: helping our audience of executives understand the technologies they are confronted with. There are all kinds of research organizations covering the consumer space, and plenty covering marketing departments and IT departments, but our unique focus is serving business leaders confronted with the implications of emerging technologies on their business.

SB: What should attendees anticipate as the takeaways from the event?

BR: Gigaom Change is being built in such a way that we can dive into these seven technologies and understand three things about each one: 1) how we got to where we are, 2) how these technologies can be deployed today, and 3) where it is all heading. The intent is to give leaders the most practical yet thoughtful insight into why these technologies matter and the very tangible impact they will have, collectively, on their business, their industry and the society of which we are all a part. While it is a very practical conference in that regard, it is not going to be your typical “sit in your chair watching talking heads all day” kind of event. It is being built as an experience, not just a conference, that connects enterprise leaders with leading-edge innovators.
If any attendees are ever served baked chicken, I have failed. Those wanting more information can contact Byron by email.

posted about 1 month ago on gigaom
There’s a lot of chat about the dangers to employment posed by technology — with both blue- and white-collar work under threat, according to a report from the University of Oxford. But how much of this is future hype, and how much short-term reality? Here are 10 reasons why nobody should worry about whether they will have something to do in the years to come:

Because decisions are more than insights. We may be able to get a great deal of information from analytics, but there will often be an extra level of judgement that only a human can bring. This is as true in healthcare and politics as in customer service and situation response.

Because we have hair, nails and teeth. All of which require cutting, grooming and general maintenance. The amount of time people spend having themselves looked after is as much in proportion to their wish to be looked after as to any hygiene factor.

Because we ascribe value to human interaction and care. In hospitals or day care centres, schools or gyms, or indeed in taxis and public services, nobody, young or old, wants to be cared for by a robot. Nor will they ever.

Because we love craft. Robots have been part of assembly plants for many years now, and will continue to be. But we still love hand-crafted stuff. It may be possible to 3D-print a statue, but the merit of having something hand-made will endure.

Because we value each other and the services we offer. Our capabilities are open to exploitation; it was ever thus. But the fundamental nature of a value exchange — “I will do something for you, and you will recompense me” — remains a constant.

Because we are smart enough to think of new things to do. Innovation, design and new thinking come from people, not machines. And even if computers suggest new ways of doing things, it will be people who pick the ideas up and run with them.

Because complexity continues to beat computing.
Even as we harvest ever larger quantities of data via sensors and cameras, through algorithms and actions, the ability of computers to make sense of it all remains behind the curve. To get ahead requires brains.

Because experience and expertise count. A plumber told me that when push-fit joints were invented, his father was concerned there would be no more need for plumbers — he needn’t have worried. In this complex world, domain knowledge, earned over years, will retain its value.

Because we see value in the value-add. If it is possible to produce a motorbike without manual intervention, it becomes a commodity — but then the motorbike with customised artwork becomes the must-have item.

Because the new world needs new skills. A wealth of potential opportunity exists for future employment, if only we knew what it was — from drone pilots to 3D print shops, from data brokerage managers to IoT farm designers. And beyond.

The bottom line is that even as computing power increases and we automate manual activities, we lose neither the desire nor the propensity for work. We have evolved such that we see work as necessary: we derive satisfaction from doing it ourselves, and from sharing the fruits of our labours. Production line jobs may come to be seen as a historical aberration, the temporary consequence of industrialisation with primitive technology. And some people, who have spent their lives working in one area, may find themselves needing to work in others. But while jobs may change, we face neither a future life of leisure nor a world of depression and worthlessness. The final sentence of the Oxford University report states, “For workers to win the race, however, they will have to acquire creative and social skills.” Well, it just may be that we already have such skills, and if only we weren’t slaves to the machine we might be able to make better use of them.

posted about 1 month ago on gigaom
The Retail Business Technology Expo, held earlier this month at London’s Olympia exhibition centre, was as remarkable for what people didn’t talk about as for what they did. Zebra Technologies’ Mark Thomson hit the nail on the head. “Look around, at the stands — nobody is talking about omnichannel anymore,” he said. “Apart from a couple — and that’s just because they haven’t caught up yet.”

Indeed. The ‘omni’ term did get a mention at the panel session on future-proofing the supply chain I hosted, but just the one. It’s not that omnichannel is less relevant; more that the discussion about channels is evolving, adapting to the perspective of an industry already re-orienting itself. Online and mobile are no longer new; they just ‘are’. And thinking about them as separate makes as much sense as making coleslaw without mayonnaise. “We’re seeing a shift towards customer centricity and a unified commerce approach, rather than a view that omnichannel is achieved,” explains Mark.

That’s the principle; of course, the reality is a work in progress. It’s still a major challenge for a retailer to oversee the customer journey as an individual flits from online investigation, to in-store evaluation, to mobile purchase (and, potentially, physical return) — a gap that benefits neither retailer nor customer. Technologies are far from seamless; it’s hard to get everything working together, with the result that customers can bear the brunt, for example by being asked for the same information over and over again. “The technology needs to get out of the way when the customer wants to buy,” says Ian Benn, Head of EMEA at Ingenico.

Part of the response could be tokenisation, originally conceived for customer privacy protection but with a spin-off benefit of acting like a non-intrusive customer ‘cookie’ within the sales process. “Tokenisation lets customers roam from point of sale to m-commerce to e-commerce and back again; often in the same purchase,” Ian continues.
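The idea behind tokenisation can be seen in a toy sketch: a vault swaps the real card number for a random token, and because the same card always maps to the same token, the customer can be recognised across point of sale, m-commerce and e-commerce without any of those systems handling the card number itself. To be clear, the `TokenVault` class, its method names and the in-memory dictionaries below are illustrative assumptions, not any real payment platform’s API; production tokenisation lives inside a hardened, PCI-scoped vault.

```python
import secrets

class TokenVault:
    """Illustrative token vault: maps a card number (PAN) to a random
    token so downstream channel systems never see the real PAN."""

    def __init__(self):
        self._pan_to_token = {}
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        # Reuse the existing token, so the same card yields the same
        # token across in-store, mobile, and online purchases.
        if pan in self._pan_to_token:
            return self._pan_to_token[pan]
        token = secrets.token_hex(8)  # random, carries no card data
        self._pan_to_token[pan] = token
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault can recover the PAN (e.g. for settlement).
        return self._token_to_pan[token]

vault = TokenVault()
t1 = vault.tokenize("4111111111111111")  # in-store purchase
t2 = vault.tokenize("4111111111111111")  # later mobile purchase
assert t1 == t2  # same token, so the customer is recognised cross-channel
```

The token behaves like the ‘cookie’ Ian describes: it links the journey together for the retailer while revealing nothing exploitable if intercepted.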
Whatever the technological answer, according to my panellists the trick is seeing what the whole of retail is about. In this complex, digitally augmented world, retailers can lose touch with the fact that they are ultimately curators, managing the relationship between customers and their suppliers, and getting the right products to the right people. This may sound obvious, but it needs to remain front and centre of the retail and logistics strategy, whatever tools are available to achieve it and whatever their ramifications. The other two key parts of any strategy are to deliver on foundation standards for data format and exchange, and to get the different parts of the business engaged with what such capabilities enable. Essentially, these two pillars provide a technology-up and a business-down response to meet retail’s fundamental goal. Consumers may have changed how they buy, but the market remains wide open for those who can meet their needs, working across whatever ‘channels’ consumers choose to express them, along the entire route from supplier to warehouse, to home and beyond.
