posted about 1 month ago on gigaom
In this digitally enabled world, it’s easy to be distracted by the wealth of technology now available. Mobile apps and devices are transforming the nature of logistics, for example, with devices now used to track consignments and to plan and monitor routes. Features from identifying gas station locations to directly updating customers are becoming part and parcel of the logistical journey.

It would be churlish to suggest that such capabilities are inadequate, as they quite clearly bring a great deal of value. For example, a local business with which I am familiar has improved visibility of its delivery team, to the extent that it can respond better to customer complaints. On one occasion, mobile technology proved that a driver was driving under the speed limit through a village, when a complainant claimed otherwise.

At the same time, however, the features mobile apps currently provide are largely aimed at the logistical process itself – as if it existed in isolation from what was being delivered and, more importantly, why. The transportation of goods has traditionally been considered independently, largely because it is so manually intensive and, in itself, complicated. As technology advances, however, this more isolated nature of logistics is changing.

A simple yet profound example is the click-and-collect service, in which a retailer offers delivery of an online purchase to a location of choice – a shop or an affiliated delivery point. It’s a great idea, making for a significantly improved customer experience — if it works. If it does not, the dream can quickly turn to nightmare. The key to success (or indeed, failure) is the ability to transmit clear information between the two most important components in the chain: the consignment and its planned recipient — a retail customer or, in the B2B case of a spares management chain, a field engineer. Mobile devices remain a very important element, as they offer a window onto the logistical world.
For this window to operate effectively, however, back-end systems and tools (such as inventory systems, maintenance schedules and sales databases) need to exchange information in a fashion that appears seamless. No room exists for doubt when it comes to whether a delivery has taken place. This may sound obvious, but it is not yet always the case. In one anecdotal click-and-collect example, a customer went to a store to pick up a package, only to find it had already been returned due to a mismatch of delivery records. On another occasion, an order was cancelled because the product was no longer available, without the customer being updated – who only found out on arriving to collect it. Logistics simply cannot afford errors such as these, as they jeopardise its very rationale.

On the upside, get things right and a number of new opportunities emerge — not only to differentiate the business in both B2C and B2B markets, but also to extend product ranges and squeeze out that all-important operational efficiency. The threat is that incumbent logistics organisations may not have forever to get things right. Consider Uber: while it, and its competitors, have significantly disrupted taxi and private hire services, the company’s valuation is based on its potential as a delivery mechanism for all forms of physical goods. “Uber isn’t valued at more than $50 billion because it’s a ‘taxi app’,” explains Adrian Gonzalez, president of supply chain consulting firm Adelante SCM, but because “investors see Uber as a logistics company.”

Despite technologies such as 3D printing, self-driving vehicles and flying drones threatening to change how products are delivered and received, these remain early days for logistics – and furthermore, the data integration points with other systems will remain the same, however a delivery takes place. So this is certainly not the time to paint any disaster scenario.
Rather, it is the right moment to get the basics of offering an integrated service right, in the knowledge that whatever comes in the future, integration will only grow in importance.
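To make the integration problem concrete, here is a minimal sketch of reconciling a retailer’s and a carrier’s view of the same consignments. The field and status names are hypothetical, not any vendor’s actual schema; the point is only that a click-and-collect failure is, at root, two systems disagreeing about one record.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    IN_TRANSIT = "in_transit"
    AT_PICKUP_POINT = "at_pickup_point"
    COLLECTED = "collected"
    RETURNED = "returned"
    CANCELLED = "cancelled"

@dataclass
class DeliveryRecord:
    consignment_id: str
    status: Status

def find_mismatches(retailer, carrier):
    """Compare the retailer's and the carrier's view of each consignment.

    Returns the IDs whose status differs between the two systems --
    exactly the discrepancy that leaves a customer standing in a store
    while the parcel has already been sent back.
    """
    carrier_view = {r.consignment_id: r.status for r in carrier}
    return [
        r.consignment_id
        for r in retailer
        if carrier_view.get(r.consignment_id) != r.status
    ]

# The retailer believes the parcel is waiting; the carrier has returned it.
retailer = [DeliveryRecord("C-1001", Status.AT_PICKUP_POINT)]
carrier = [DeliveryRecord("C-1001", Status.RETURNED)]
print(find_mismatches(retailer, carrier))  # ['C-1001']
```

In practice such a check would run as an event-driven sync rather than a batch comparison, but the principle is the same: surface the disagreement before the customer does.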

posted about 1 month ago on gigaom
This article is the first in a series of six. It is excerpted from Not Everyone Gets a Trophy: How to Manage the Millennials by Bruce Tulgan.

It seems that the vast majority of leaders and managers think Millennials have an attitude problem. But isn’t this always the case when a new generation joins the workforce? Doesn’t every new generation of young workers irritate the older, more experienced ones? At the early career stage of life, young people are just learning to break away from the care of others (parents, teachers, institutions) and taking steps toward self-sufficiency and responsibility. As they move into the adult world with the energy and enthusiasm—and lack of experience—that is natural at that stage, they are bound to clash with more mature generations.

And yet as much as human experience—such as the rite of passage into the workforce—stays the same over time, the world doesn’t. What makes each generation different are the accidents of history that shape the larger world in which human beings move through their developmental life stages. So while every generation rocks the boat when it joins the adult world, each also brings defining characteristics that alter the rules of the game for everyone going forward. Millennials’ “attitude” is not likely to go away as they mature; their high-maintenance reputation is all too real. Still, the whole picture is more complicated. Yes, Millennials will be more difficult to recruit, retain, motivate, and manage than any other new generation to enter the workforce. But this will also be the most high-performing workforce in history for those who know how to manage them properly.

Meet the Millennial Generation

Although demographers often differ on the exact parameters of each generation, there is a general consensus that Generation X ends with the birth year 1977. Most agree that those born between 1978 and 2000 belong in the Millennial Generation.
But by our definition at RainmakerThinking, Inc., the Millennials come in two waves: Generation Y (those born between 1978 and 1989) and Generation Z (those born between 1990 and 2000). Gen Yers are today’s thirty-somethings, no longer the youngest people in the workplace, while Gen Zers are the newest young workforce, the ones swelling the rising global youth tide in today’s workplace.

Here’s the short story with the Millennial Generation: If you liked Generation Y, you are going to love Generation Z. If Generation Y was like Generation X on-fast-forward-with-self-esteem-on-steroids, Generation Z is more like the children of the 1930s… That is, if the children of the 1930s were permanently attached to hand-held super-computers and reared on “helicopter parenting” on steroids.

Overall, the Millennials embody a continuation of the larger historical forces driving the transformation in the workplace and the workforce in recent decades. Globalization and technology have been shaping change since the dawn of time. But during the life span of the Millennials, globalization and technology have undergone a qualitative change. After all, there is only one globe, and it is now totally interconnected. Millennials connect with their farthest-flung neighbors in real time, regardless of geography, through online communities of interest. And as our world shrinks (or flattens), events great and small taking place on the other side of the world (or right next door) can affect our material well-being almost overnight. Nothing remains cutting edge for very long. What we know today may be obsolete by tomorrow. In a world defined by constant change, instantaneous response is the only meaningful time frame.

Why are Millennials so confident and self-possessed, even in the face of all this uncertainty? One reason is surely that they grew up in and after the Decade of the Child. Gen Xers were the great unsupervised generation (they made the latchkey into a metaphor).
But Millennials are the great over-supervised generation. In the short time between the childhood of Generation X and that of the Millennials (especially Generation Z), making children feel great about themselves and building up their self-esteem became the dominant theme in parenting, teaching, and counseling. Throughout their childhood, Millennials were told over and over, “Whatever you think, say or do, that’s okay. Your feelings are true. Don’t worry about how the other kids play. That’s their style. You have your style. Their style is valid and your style is valid.”

For Millennials, difference is cool. Uniqueness is the centerpiece of identity. Customization of the self is sought after with great zest and originality, through constant experimentation. In the world of the Millennials, the menu of selfhood options is extraordinary and the range of possible combinations infinite. For the Millennials, customization is the Holy Grail, and it has always been right there within their grasp. From the first day they arrive in the workplace, they are scrambling to keep their options open, leverage their uniqueness for all its potential value, and wrap a customized career around the customized life they are trying to build.

About the Author

Bruce Tulgan is an adviser to business leaders all over the world and a sought-after keynote speaker and seminar leader. He is the founder and CEO of RainmakerThinking, Inc., a management research and training firm, as well as RainmakerThinking.Training, an online training company. Bruce is the best-selling author of numerous books, including Not Everyone Gets a Trophy (Revised & Updated, 2016), Bridging the Soft Skills Gap (2015), The 27 Challenges Managers Face (2014), and It’s Okay to Be the Boss (2007). He has written for the New York Times, the Harvard Business Review, HR Magazine, Training Magazine, and the Huffington Post.
Bruce can be reached by e-mail at [email protected]; you can follow him on Twitter @BruceTulgan, or visit his website.

posted about 1 month ago on gigaom
Aging societies worldwide pose serious challenges both to healthcare delivery and to how it is financed. The financial challenges fall not only on the healthcare system, but also on families and caregivers, who spend a significant amount of time and money caring for the elderly. Carers UK, a non-profit organization, has estimated that over 2 million people have given up jobs, and over 3 million have cut back hours of work, in order to care for sick or disabled elderly relatives. Gallup estimates that lost productivity due to caregiving amounts to over $25 billion annually in the US alone. These figures do not count the economic impact of chronic diseases and other issues that affect the elderly. Falls alone are estimated to cost nearly $15 billion per year and are a major cause of death due to sequelae (additional complications) after the fall.

While the headlines on wearables tend to focus on athletes and those who are already in good health, the business case for wearables that manage the issues afflicting the elderly, coupled with the time and financial constraints on their caregivers, may be even more robust, if the right solutions can be made available. A brief look at the current market for wearables reveals that solutions for the elderly are beginning to enter the market. Personal emergency response systems (PERS), worn by the elderly and activated in the event of a fall, have been available for many years. These devices allow the user to summon emergency assistance via a button on a device worn on the wrist or around the neck. The next generation of PERS, such as Artemis or Amulyte, has broader applications for safety while also considering fashion and the use of smartphones. Even more importantly, we need to harness sensors to predict the onset of conditions that may lead to falls and other health issues before they happen.
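As a toy illustration of the kind of signal processing such sensors rely on, the sketch below flags a possible fall from raw accelerometer readings: a sharp impact spike followed by near-stillness. The thresholds are made up for illustration and are in no way clinically validated; production systems combine far richer features and models.

```python
import math

def acceleration_magnitude(sample):
    """Euclidean norm of a 3-axis accelerometer reading, in g."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_fall(samples, impact_g=2.5, stillness_g=0.3):
    """Flag a possible fall: an impact spike followed by near-stillness.

    `samples` is a time-ordered list of (x, y, z) readings in g.
    At rest the magnitude sits near 1 g (gravity), so "stillness"
    means every later reading stays close to 1 g.
    """
    magnitudes = [acceleration_magnitude(s) for s in samples]
    for i, m in enumerate(magnitudes[:-1]):
        if m >= impact_g and all(
            abs(later - 1.0) < stillness_g for later in magnitudes[i + 1:]
        ):
            return True
    return False

# A spike (~3.1 g) followed by readings near 1 g reads as a possible fall;
# ordinary walking noise does not.
fall_trace = [(0.0, 0.0, 1.0), (2.1, 1.8, 1.5), (0.0, 0.1, 1.0), (0.0, 0.0, 0.95)]
walk_trace = [(0.0, 0.0, 1.0), (0.3, 0.2, 1.1), (0.1, 0.0, 1.0)]
print(detect_fall(fall_trace), detect_fall(walk_trace))
```

Prediction, as opposed to detection, means watching subtler trends in gait and stability over weeks, which is where cloud platforms analysing wearable data come in.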
AgeWell Biometrics offers a cloud-based platform that can connect to a broad range of wearable devices and enable healthcare professionals to evaluate an individual’s stability and their risk of falls and other neurological issues. In addition, the market for remote monitoring technologies that use sensors to measure cardiac function, track illness symptoms (e.g. Samsung Gear smartwatches) or track location while safeguarding privacy (CarePredict) looks set to grow. In the next several years we expect to see many of the expensive medical devices found in intensive care units become wearable sensors worn on the wrist or in clothing. This development will help health systems transition patients out of hospitals (where they can be at risk of infection) and into the home sooner, hopefully for a speedier recovery. Other applications include addressing conditions such as Parkinson’s disease and treating hand tremors, or helping those suffering from Alzheimer’s (Smart Sole).

To realize the benefits of wearables for the elderly, we will need to improve how this data is captured and shared with the right healthcare provider and/or caregiver, at the right time. However, advances made in recent years are on the right track to deliver solutions that will make a real difference, in terms of both cost and quality of life.

Photo credit: Neill Kumar

posted about 1 month ago on gigaom
Last month we touched on the dual role of IT managers as both internal support and tech influencers at GigaOm. It’s pretty much the same story at companies around the world, as are the core principles for doing business: maximize efficiency and productivity while reducing costs. HP sent us a shiny new toy to help us accomplish those tasks, and asked us to share our thoughts in a series of sponsored posts. You can check out our initial thoughts HERE, but now it’s time to talk tech.

We set up our new HP Color LaserJet Pro MFP M477 and decided to show it no mercy. If we were going to really #ReinventTheOffice with HP’s help, we needed to go full Thunderdome on that machine. In the corporate world, that pretty much means we printed a lot of stuff on a tight deadline. The task at hand: financial reporting packets. That translates to eight full-color pages times eight packets, plus spiral binding, stuffing, and labeling, all before the UPS driver arrived at 3 p.m. We started at 2.

HP states that its JetIntelligence toner cartridge technology is “engineered to make the printer print faster and enable higher page yields.” This is highly important when two-day versus overnight shipping is on the line (see the need for cost effectiveness in paragraph one). As previously stated, we were pushing the limit by starting at 2 p.m., but we like living on the edge. As it turned out, we had a bit of time to spare: it took just under four minutes to print 64 pages. Printing the mailing labels went even faster. The packets were done so far in advance we vowed to procrastinate even longer next month. You might be thinking the quality had to go down for the pages to come out that fast, but you’d be wrong. The graphics looked great, even in full color, and the black and white was crisp. We were pretty pleased with round one.

Over the next few weeks, it was business as usual at GigaOm, except it wasn’t. It was better.
The MFP M477 kept right on printing at top speed, producing quality pages at an impressive clip. The first page out was particularly fast at under 10 seconds, and under nine when printing black and white. Most jobs printed right around the top posted speed of 28ppm, which we loved. And changing the dreaded toner cartridge wasn’t very dreadful at all, as it turned out. The built-in toner indicators were accurate and prevented premature replacement, and when it was truly time for a new cartridge, the auto seal remover feature made it easy and efficient. No more fighting, cramming, and messy hands.

One feature we were particularly interested in is the anti-fraud technology touted in the product materials. Printers are part of our network, as they are in most businesses, and if not protected, they can provide unsecured access for hackers. This is not good. To prevent unauthorized access, HP employs three security tactics to protect our data: HP Sure Start, which validates the BIOS code; whitelisting, which authenticates the firmware; and runtime intrusion detection, which monitors and detects anomalies in firmware and memory operations. Those are abbreviated descriptions of some rather complex processes, but you get the idea.

Adding a faster printer to the fray might seem like an obvious way to improve productivity in business, but it’s not that easy. Speed is great, but not when it means sacrificing graphic integrity, increasing cost, or complicating procedures. The MFP M477 goes beyond faster printing and really does simplify many aspects of office life. Removing the headaches of paper jams, toner explosions, frequent cartridge replacement, pages trickling out at a snail’s pace, and the fear of a security breach is a big step forward. You might even say it’s a way to #ReinventTheOffice.

posted about 1 month ago on gigaom
Jason Fried of Basecamp is only the most recent to come out strongly condemning the hype around work chat and, perhaps, the leading protagonist in the market: Slack. He enumerates a short list of positives (4), and then a staggeringly long list of negatives (17). I will synthesize his points down to these: work chat is good for quick-and-dirty, once-in-a-while discussions, and for team building, but the costs are considerable, since work chat is tiring, obsessive, and interruptive, and leads to focusing people’s attention on the near term while fracturing our concentration on what’s really important.

Fried’s mantra is ‘real-time sometimes, asynchronous most of the time’, which I completely buy. I am also a big fan of his recommendation that people should break out of unproductive chat mazes and ‘write it up’ instead. Long-form writing can break the chain of opinionated chatifying and lead to a basis for deliberative reasoning. Go read it. I’ll wait.

But, as in other recent pieces about Slack (see Samuel Hulick’s Why I’m breaking up with Slack), Fried never explicitly discusses the sizes of the groups using work chat, and how group size may factor into the negatives these authors describe. My thesis is that work chat works best in the context of small teams, which I call sets: groups of fewer than 10 or 12. Many of the problems that beset work chat in business contexts arise from social crowding, when the dynamics of small groups are constrained or sidetracked because too many people move into groups to participate, when they aren’t actually members of the set of people doing the work. Sets are characterized by small group social dynamics.
There is frequent and reciprocal communication, so a member can post a request for help and get a response quickly, for example. There is a greater degree of trust than in larger groups, in general. There is a greater likelihood of strong interpersonal connection — strong ties — than in out-of-set relationships.

But if a set of nine marketing folks is joined by (invaded by?) a dozen out-of-set members in a Slack channel where the marketers are trying to get their work done, the dynamics can go sideways. There is greater noise in the channel as the interlopers raise questions, throw their opinions around, and take sides in discussions. This crowding is worse than the noise alone, since the ‘tourists’ can lead to a decrease in the benefits of tight, in-group dynamics, and a hollowing out of purpose and shared goals.

So there are several threads that follow from social crowding. Social norms have to be expressly promoted to keep chat channel populations low, if channels are going to be the site of effective team work. (Note: I mean the work done by teams, not the somewhat nebulous, rah-rah term on the posters in the lunchroom.) And chat is not the only sort of social mechanism that we should apply to work communications; specifically, when we look at larger-than-set social groups there are better ways to communicate. We do much of our work as soloists and set members, but we are also members of larger scenes — groups of up to 150, more or less, made up of networks of sets. Effective communications at that level require more than — or other than — chat.
Consider Fried’s suggestion of asynchronous, long-form ‘writing it up’ as just one example. This is a specific instance of the general issue of ‘work as a commons’. The folks most closely tied to some definable work activities — like our marketing team, above — should have the largest say in how their work is performed, and in the decision-making about their work practices. That’s what they share in common. Meanwhile, those farther from that work — the freeloaders crowding the chat with their noise, interruptions, and influence — should be kept from the set’s workings if that interaction is negative.

In the long run, vendors like Slack and its competitors will need to create a multi-scale suite of communications approaches that align with social groupings. Work chat may be best suited for much of what sets need, while other approaches — like those we see in enterprise social networks (work media), work management tools, and workforce communications solutions — are likely to be better suited to work at the scene level, or at the enterprise scale: the scale of networks of scenes, or spheres.

There are few who would advocate a massive chat room of 100,000 employees palavering with each other to steer a company, but we are making more or less the same mistake — social crowding — when we allow 25 people to argue product strategy in a Slack channel. It’s a difference only of scale, and the same error: applying a communication tool that does not work well at the scale of the social group.

Originally published at stoweboyd.com and workfutures.io on 8 March 2016.

posted about 1 month ago on gigaom
Here’s a confession: I hate horror movies. It took me a while to realise that I didn’t actually have to watch them – some point after The Silence of the Lambs, I thought, nah. If I never see another horror film again, I will not feel in the slightest bit bereft.

But here’s the other thing. It took me even longer to work out that horror movies are deliberately looking to make people scared. I thought it was just how I reacted – badly – but no, that’s the whole point. If a horror film isn’t turning you into a blubbering wreck who checks the fridge when you get home, in case someone is hiding inside… then it isn’t doing its job.

Which raises the question – what do stories become when we start to become participants in their creation? That’s exactly the question the folks at the strangely named Mashup Machine have set out to answer. Their first story is Scary Cabin – yeah, you already get the plot. Or do you? What happens when you, or anybody else, can change it?

Clearly, Scary Cabin isn’t really my kind of story. I have a nasty feeling that my involvement would start with moving all sharp objects a good distance away, or simply getting the heck out of there (which is what the last person standing always seems to do – I want to be that person). But there is something in this. Stories evolve in their telling; they are embellished, refined. Teams can collaborate on plots, or simply nudge lead writers in a direction. Brainstorms can generate scenarios that have to be integrated, sometimes to great effect. And we can now do all of these things with the power of a globally motivated crowd.

Whether or not Mashup Machine is the answer remains to be seen. This isn’t the first attempt at user-defined stories, but it is coming at a time when a whole bunch of technologies are more readily accessible – the immersive nature of VR, for example, coupled with the machine learning power of the cloud.
Behind it all is storytelling itself, which is evolving in parallel with the digitally enabled society we are creating. Since ancient times we have told stories in order to learn, to cope and survive. For better or worse, the stories we create digitally could tell us more about ourselves, or the people we are becoming, than we could ever read in a book.

posted 2 months ago on gigaom
Since the arrival of the first consumer-bought smartphones, enterprise security has been under threat. That all-important chain of defense against security risks has been undermined by its weakest link – people – in this case by their using non-standard devices to conduct business and thereby making corporate data vulnerable to attack. The alternative, to roll out company-issued mobile devices, has not been an easy path to follow. When historical market leader BlackBerry lost its leading position in the market to Apple and Google’s Android, companies also lost a significant part of their ability to control corporate messaging and applications from a central point.

From the perspective of the IT shop, the consequence has been fragmentation, which has undermined the ability to deliver a coherent response in security terms. While solutions such as Mobile Device Management have existed, they have been seen as onerous; also, some devices (in particular those based on Android) have been seen as less secure. Looking more broadly, many organisations have ended up adopting an approach in which corporate devices are used alongside personal equipment for business use. The genie of consumerisation is out of the bottle, say the pundits. But now that devices exist which can deliver on an organisation’s mobile security needs, the question is: can it be put back?

The answer lies in addressing the source of the challenge, which is not the device but the person using it. Human beings assess risk all the time, and indeed, we are very good at it. In the case of a mobile device, for example, we are prepared to put up with a small amount of discomfort if it will get us the result we want: sending a message, say. If the discomfort is too great, we will assess other risks, such as, “What happens if I get caught using my personal phone?” If the answer is nothing, then the chances are that the behavior will continue.
With this in mind, anyone deploying a mobile solution needs to consider two variables: the discomfort it causes, and the cost of avoiding that discomfort. Considering the discomfort first, the point of any mobile solution is to enable productivity. Different security features — such as encrypted data storage, separation of apps and so on — may be applicable to different business scenarios. Defining a solution appropriate for an organisation or group requires familiarity with the security features available on a device and the risks they mitigate. Better knowledge makes for more flexibility, reduced operational overhead and therefore an increased probability of a successful deployment.

An equal partner to product knowledge should be an understanding of the organisation concerned, the data assets to be protected and what constitutes their acceptable use. If a policy is already in place, it may need to be reviewed: note that it needs to be signed off at the top of the organisation to be effective. Once a standard configuration has been defined, it will require testing. Too often, enterprise mobile security can fail “for want of a nail” — insufficient licenses on the RADIUS server, for example, or a lack of WiFi coverage in areas where authentication takes place. Users need a solution that works from day one, or they will immediately lose confidence in it.

Putting all these measures in place can help minimize discomfort, but they need to go hand in hand with measures to ensure that the capabilities cannot be circumvented. Note that we are talking about the organisation’s most important asset — its people — who will respond far better to inclusionary tactics than to draconian ones. While understanding why secure mobile working technologies are being deployed, however, employees also need to know that they are expected to act as a strong link in the chain, not a weak one.
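Mapping device security features to the risks they mitigate can be sketched very simply. The feature names below are hypothetical placeholders, not the attribute set of any real MDM product; the point is that a standard configuration is, at heart, a checklist that each device either satisfies or does not.

```python
# Hypothetical required features mapped to the risk each one mitigates.
REQUIRED_FEATURES = {
    "storage_encryption": "protects data at rest if the device is lost",
    "app_separation": "keeps corporate apps and data apart from personal ones",
    "screen_lock": "blocks casual access to an unattended device",
}

def compliance_gaps(device_features):
    """Return the required features a device lacks, with the risk
    each missing feature would have mitigated.

    `device_features` is the set of security features the device reports.
    """
    return {
        feature: risk
        for feature, risk in REQUIRED_FEATURES.items()
        if feature not in device_features
    }

# A device reporting only a screen lock and encrypted storage
# still leaves the app-separation risk open.
gaps = compliance_gaps({"screen_lock", "storage_encryption"})
print(gaps)
```

Real deployments would pull the reported feature set from an MDM inventory and gate network access on an empty gap list, but the reasoning is the same: know the features, know the risks, and make the mapping explicit.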
An Acceptable Use Policy should be enforceable, in that a staffer at any level can expect their card to be marked if they attempt to circumvent it. In addition, the genie should be given a clear timescale for getting back in the bottle. For example, in an ‘anything goes’ environment which mixes personal and corporate mobile equipment, individuals should be given a cut-off date after which corporate data access will only be possible via a secure device.

A final question is about sustainability: how to keep it all going? Reporting is important, with deprovisioning perhaps the most critical aspect — it is one thing to know that resources have been allocated to the right people, but more important still to know that any rights — and indeed devices — have been returned on change of role or exit from the company. The bottom line, and the most fundamental challenge, is ensuring that any shiny new corporate devices deliver on what they are supposed to do — in this case, enabling mobile users to stay productive without compromising on corporate risk. Provide people with usable security they will not try to circumvent, and you avoid consigning devices to the desk drawer.

If you’re interested in improving your business’s mobile security operations, join us for our upcoming webinar: Evolving Enterprise Security for the Mobile-First World. This webinar is presented by GigaOm’s Jon Collins, with sponsorship by Samsung. Register now for the webinar taking place on Wednesday, March 9 from 1 to 2pm EST.

posted 2 months ago on gigaom
Excerpted from RainmakerThinking, Inc.’s white paper, The Great Generational Shift.

What is the Great Generational Shift?

There is a “Great Generational Shift” underway in the workforce today. This is the post-Baby Boomer shift that demographers and workforce planners have been anticipating for decades. It is not only a generational shift in the numbers in the workforce, but an epic turning point. This is the final stage of a historic period of profound change globally and a corresponding transformation in the very fundamentals of the employer-employee relationship.

The Numbers Problem: Workforce 2020

While there are always different people of different generations working side by side in the workplace, today there are as many as six different generations, depending on which demographic definitions one uses. The workforce is aging on one end of the spectrum and getting younger on the other. In the middle there is a gap, with the prime-age workforce shrinking as an overall percentage of the workforce.

Generations in the workplace in 2016. The oldest, most experienced people in the workplace, “pre-Boomers” – those born before the post-WWII “Baby Boom” began in 1946 – still make up more than 1% of the workforce. The Baby Boomers (born 1946-64) are 30%, Generation Xers (born 1965-77) are 27%, and the Millennial Generation is 42%. Because both the Baby Boomers and the Millennials are such large generations with such long birth-year time spans by the broadest definitions, we have found it useful to split them each into first-wave and second-wave cohorts. Based on our model, the generations in the workplace today break down as follows:

- Pre-Boomers (born before 1946): more than 1%
- Baby Boomers (born 1946-64): 30%
- Generation X (born 1965-77): 27%
- Millennials (born 1978-2000): 42%, split into Generation Y (born 1978-89) and Generation Z (born 1990-2000)

The age bubble. On the older end of the generational spectrum, the workforce is aging, just as the overall population is aging. This is particularly notable in Japan, most of Europe, and North America. In North America alone, ten thousand Baby Boomers have been turning 65 every single day since 2011.
The Boomers are filling up an “age bubble” in the workforce, such that there are many more people at or near the ordinary age range for retirement. The exodus of the first-wave Boomers from the workplace – postponed for several years by the economic crisis that began in 2008 – is now swift and steady. By 2020 Boomers will be less than 20% of the Western workforce; older Boomers (born before 1955) will be less than 6%. What is more, the Boomers who do remain in the workforce will continue trending heavily toward “reinventing” retirement and late-career pre-retirement: working less than full-time, often partially telecommuting, and often working non-exclusively for more than one employer.

The youth bubble. At the same time, the fastest growing segment of the workforce is made up of those born in 1990 and later, so there is a growing youth bubble on the younger end of the spectrum. The youth bubble is growing even faster in “younger population” regions of the world. But even in “older” North America, Europe, and Japan, the youth bubble in the workforce is rising much faster than in recent years, because employers are once again hiring new young workers after several years of formal and informal hiring freezes resulting from the economic crisis. By 2020, second-wave Millennials (those born 1990-2000) will be greater than 20% of the Western workforce, and another 4-5% will be made up of post-Millennials born after the year 2000. In most of the world, the youth bubble will be much, much larger. The rising global youth tide will bring to the workplace radically different norms, values, attitudes, expectations, and behavior.

Workforce 2020 – remember “2020, 20, 20.” By the year 2020, the Western workforce will be made up of less than 20% Baby Boomers and more than 20% second-wave Millennials (plus another 4-5% of post-Millennials).

The rising global youth tide. The youth bubble is much, much larger in Africa, Latin America, and much of Asia.
Second-wave Millennials are already, in 2016, greater than 45% of the workforce in India, Mexico, Brazil, Indonesia, and Vietnam, and greater than 60% in Nigeria. By 2020, in these younger parts of the world, those born in 1990 and later will be more than 60% of the workforce. Given the increasing globalization of the workforce, one important feature of the growing youth bubble is that it will be increasingly global, with a much greater percentage of the new young workforce coming from outside North America, Europe, and Japan.
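The 2016 cohort shares quoted earlier in the excerpt can be sanity-checked in a few lines of Python. The figures are RainmakerThinking’s, taken as given here, with pre-Boomers rounded to 1%; this is a sketch for the reader, not part of the original white paper:

```python
# 2016 workforce shares quoted in the excerpt (percent).
workforce_2016 = {
    "pre-Boomers (born before 1946)": 1,
    "Baby Boomers (born 1946-64)": 30,
    "Generation X (born 1965-77)": 27,
    "Millennials": 42,
}

# The four cohorts should account for the whole workforce.
total = sum(workforce_2016.values())
print(total)  # 100
```

The shares do sum to 100%, which is why the split into first-wave and second-wave cohorts, rather than new top-level categories, is the natural refinement.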

posted 2 months ago on gigaom
Amy Ingram, the persona of the x.ai virtual assistant — which is a program — got asked out every month last year. Originally posted on stoweboyd.com on 25 February 2016.

posted 2 months ago on gigaom
Fragmentation Continues, and Software Budgets Are Growing Rapidly. We’ve seen a massive change in the enterprise software market in the last dozen years, and it can be described by four numbers: 20, 10, 12, and 4. Here is the summary of how the landscape has changed in the last ten years:

- 20x increase in software vendors
- 10x the number of software products companies buy
- 12x the number of internal “buyers” inside companies
- 4x increase in budgets for software solutions

(Note: This is from hundreds of surveys Siftery did of many of the largest software buyers. Siftery helps companies discover enterprise software they should be using. Disclosure: I am an investor and board chairman at Siftery.)

The high-level macro is that the world of software is getting way more crowded and way more complex. Buying software is daunting … but it is also REALLY impactful. The right software gives you a competitive edge. Software isn’t just eating the world … it can help you eat your competitors for brunch. Choosing the right vendors is really important for businesses to succeed … so we would expect companies to be spending much more time selecting vendors … and they have.

The Four Numbers: 20, 10, 12, and 4. In the last dozen years, we have seen a software explosion. This is mainly because software has become easier to buy, integrate, deploy, and use now that it no longer needs to be hosted on premises. This software is usually lumped together as SaaS (or DaaS, or even some other name) … but the main take-away is that it does not need a really expensive on-premises integration, and therefore it can be tried and used quickly and with less risk. When we talk about “software”, we’re not just talking about code you run inside your corporate firewall. And we’re not just talking about traditional SaaS products (like Salesforce, Workday, etc.) either. We’re also talking about data services your company consumes (like Bloomberg and Acxiom).
And services that help market your business (like ad networks and Google search spend). And subscriptions you use to help your business (like LinkedIn and Glassdoor).

20: The number of software vendors has increased 20x in just ten years. That’s right: there are twenty times the number of software vendors that there were just ten years ago. There are a lot of causes of this rapid increase:

- As stated earlier, it is much easier to buy software, so there is more software to buy.
- The really large software companies have embarked on a strategy of growing their product and feature sets by acquiring companies (or sometimes partnering) rather than developing those features and products in-house.
- Seeing this opportunity, venture capital firms have poured tens of billions of dollars into funding software companies.
- It is now much cheaper to start a software company, because of things like distributed computing and the rise of software companies that help other software companies.

10: The number of software products (unique vendors) a company buys has increased 10x in the last ten years. This is an even crazier statistic than the first macro trend. Take the average large retailer — ten years ago they had maybe 40 software vendors across their organization … and today they will have 400. One large retailer polled by Siftery has over 2,000 software vendors! The right software stack gives your company a competitive advantage. And because it is so much easier to buy today (and there are more people making buying decisions), companies are becoming increasingly open to buying from a large number of vendors. The major beneficiary of this trend has been start-ups … they can get a beachhead in large companies much faster than they could in the past. Of course, this trend is happening in some industries faster than others. Retailers are super fast adopters — probably because their business is so competitive and the market leaders there (Amazon, Walmart, etc.)
are filled with incredibly smart technologists. Companies that have many regulatory requirements to keep their data on premises (like financial institutions and healthcare) have fewer software vendors … but even there we have seen a massive increase in the diversity of vendors.

12: The number of internal buyers has increased 12x in the last ten years. The number of people buying software (or influencing the buying of software) in a company has grown dramatically. Almost every professional in a company is now a buyer. Software engineer? Buyer. Salesperson? Buyer. HR manager? Buyer. Finance person? Buyer. Lawyer? Buyer. Marketing? Definitely a buyer. In 2012, the Corporate Executive Board did a study that found buyers did “60% of a typical purchasing decision—researching solutions, ranking options, setting requirements, benchmarking pricing, and so on—before even having a conversation with a supplier.” In fact, only 12% of the users of Siftery are in traditional IT; the other 88% care just as deeply about the software for their organization. And it is easier than ever to buy. Companies like New Relic and Sendbloom have built their businesses selling one seat at a time to an organization and then using their internal advocates to get larger deals. Freemium software like Slack, Glassdoor, and Cloudflare is easy to try, and freemium or low-cost products mean less red tape and less need for budget approvals in the initial stages of adoption. Engineers, lawyers, salespeople, finance, HR, marketers, and others who can pick and implement software are the ones rewarded with promotions and bonuses. Professionals who do not develop the core skill of picking the right software vendors will find much more limited career paths.

4: The spend on software has increased 4x in ten years. A 4x increase in spend is astonishingly large — and it has happened because software is easier to buy and is becoming more and more powerful.
Of course, since the number of vendors a company uses has increased 10x, the dollars per vendor have decreased dramatically in the last ten years. This trend might be worrisome for the giant enterprise software companies, but it is really good news for software companies that are innovating and on offense. One thing to note is that during the last ten years, these same companies that are spending so much more on software have not significantly increased the number of people they employ. In fact, many companies have FEWER employees today than they did ten years ago (even though revenue has increased). The take-away is that companies are choosing to spend on software INSTEAD of people. This may or may not be a good thing for the world … but it is happening and will likely continue to happen.

The fragmented world of software will continue. Almost every macro data point on Siftery shows the trend of the last ten years will continue in the next ten. It’s easy to dismiss this abundance and fragmentation as just part of a cycle that will eventually move toward consolidation. But if one digs a level deeper to look at the forces that created such a vast, fragmented, and active product ecosystem, it becomes apparent that this isn’t a trend but a transformation. More business processes are automated now than ever before – run by software or reliant on it. In many cases, we see software talking to software, APIs talking to other APIs. As long as this data can be integrated well, there will be more software. This means there will be even more software companies in the future than there are today. And buyers will be even more overwhelmed and will need help deciding what to buy, how to buy, how to integrate, and how to best use the software. I invested in Siftery (www.Siftery.com) because I love their unique way of helping discover software. Siftery believes the products you should use depend on the products that you already use.
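The squeeze on per-vendor revenue follows directly from the article’s own multipliers, and can be made concrete with back-of-envelope arithmetic (using only the quoted 4x and 10x figures):

```python
# Article's figures: total software spend grew 4x while the number
# of vendors per company grew 10x over the same ten years.
spend_multiplier = 4.0
vendor_multiplier = 10.0

# Average spend per vendor, today vs. ten years ago.
per_vendor = spend_multiplier / vendor_multiplier
print(f"{per_vendor:.1f}x")  # 0.4x
```

In other words, the average vendor now captures roughly 40% of the budget it once did, which is exactly why fragmentation favors lean, innovating entrants over the incumbents.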
About Auren Hoffman: Auren Hoffman is the former CEO and cofounder of LiveRamp (sold to Acxiom) — the largest middleware provider in marketing technology. He is chairman of Siftery, which uses data to help companies better buy enterprise software. He previously served on the board of BrightRoll (acquired by Yahoo). He is the founder of the Dialog Retreat and an investor in over 65 active technology companies (https://siftery.com/groups/aurens-portfolio). Auren holds a B.S.E. in Industrial Engineering and Operations Research from UC Berkeley. You can find him on Quora and Twitter (@auren).

posted 2 months ago on gigaom
The tech world is in many ways like a large city. While we spend most of our time in a few neighborhoods, it doesn’t really come as a surprise to encounter an old friend or colleague we haven’t talked to in years hanging out in our corner café. So I was not surprised to hear that my old friend Antony Brydon had started a new company with a compelling value proposition, called Directly, and that he and his head of marketing, Lynda Radosevich, thought I’d like to learn about it.

I met Antony when he was CEO of Visible Path, an innovative social startup that was working to build the work graph — through email analysis — in the ’00s. Visible Path was acquired by Hoovers in 2008. Perhaps they were a bit ahead of the market, but that’s a sign that Brydon & Co are generally ahead of the power curve. I was also not surprised that Directly’s value proposition is very, very smart: helping other companies scale their customer support capabilities by breaking out of what I think of as the skills versus wages snare (see figure 1). The snare is this: as the necessary skill level for customer support increases, the wages that must be paid increase. This is a fundamental aspect of supply and demand.

Figure 1 — The Skills v Wages Snare

But Antony and his partners have come up with a way to break the snare, and Directly is the platform on which that magic happens. I recently interviewed Antony about Directly, and why the company is growing so quickly.

The Interview

Stowe Boyd: One thing I’d like you to do is tell the origin story. Every superhero comic strip has the story they tell about how Peter Parker got bitten by a radioactive spider, or Steve Jobs visiting PARC. So, what led to the founding of Directly?

Antony Brydon: It was not as dramatic as Peter Parker and the radioactive spider, but it was interesting. And it was interesting because the company founding and the initial inspiration were probably 15 years apart.
Back before Visible Path, I worked in call centers for a few years, back in ’98. I worked for a venture firm which invested in Genesis, Aspect, and Aurum: a lot of the early call center players. For the better part of three years, that was my beat, and I was looking at a lot of Fortune 100 call center buyers to figure out what products they needed. Call centers was an interesting beat, but it was a little depressing, because the customers were very rarely happy with the service they were receiving. The agents in the call center were a downtrodden labor force: modern-day galley ships. The call center managers were doing the best they could, but often saw themselves as sort of going to war with the tools they had. Under-equipped and underserved. Very tactical. Genesis had some incredible skill-based routing that I used in a project. It could actually connect a customer to an absolutely perfect agent. But that capability wasn’t being used very broadly, and there were a lot of reasons it wasn’t. Some of the call centers didn’t want to hire the truly skilled agents because it could cost more money.

SB: Right, and I bet they didn’t want to test people to gather all that data, either. That was expensive, too.

AB: Yes. They didn’t want to break their agents down into different pools based on skill levels, because utilization would drop. If you had one homogeneous pool, you could give a call to anybody. If you had 10 skills-based pools, utilization fell through the floor, so the economics fell down. A great example of really great technology that didn’t get applied anywhere near the degree that it should have, because of the constraints in the business model. And that, entrepreneurially, was pretty depressing to watch. My diagnosis was that the problem wasn’t the technology.
The problem was the talent side of the equation, and how that talent was being managed. There was no amount of technology that could fix it. When you’ve had very low-skill folks being hired at very low wages, who are being pushed to handle 60, 80, 100 customers a day and sort of being churned out 12, 13 months later once they’ve met the learning curve, well, that invariably puts a low ceiling on customer experience. A depressing trend, and that was the reason I made a conscious move to stop working on it. That wasn’t the inspiration for Directly, per se, but that was my diagnosis at the time. That was the real problem. It made me very unexcited. For the last 15 years or so, whenever Jeff [Paterson, the co-founder and head of product at Directly] and I sold a company, we would come back to this problem that had bothered me a long time ago and just start hacking to see if there was any new approach or insight that we could bring to bear on it.

SB: You’re saying the obvious thing was being blocked by the business model, which led low-skilled people to become more skilled, and then their higher wages would become unaffordable.

AB: Yes, absolutely. Skilled workers could have been created very quickly: a Samsung Galaxy question routed to a Samsung Galaxy expert, with that person doing a normal number of inquiries for the day and without the boot on the back of their neck to get off the phone in six minutes. But that was blocked by the economics of the business model, and that blocked the talent model. We looked at this in 2001, after eMusic had just been acquired. We looked at it in 2008, after we sold Visible Path. When we came back to it in 2011, we had the benefit of seeing a lot of these on-demand companies coming up. We saw Lyft coming up. And Uber coming up. And Airbnb. And all of these companies taking advantage of a kind of fractional availability.
When we applied that to the old problem, we saw for the first time the potential of pulling together people with much higher skill sets than would ever sit in a call center, and then making them available for small fractions of time: when their skill set really matched a customer’s problem, and at levels where they could really opt in and delight a customer. With that combination of the ability to get more talent than had existed before, and also to render that talent, the business model, the economics of it, could work. So that was the initial insight. That’s what started us down that pathway quickly, building some quick primitive apps and starting to test that idea. It bore out very quickly. We did some initial tests in maybe 6 to 12 weeks of initial building, and we got some very good signals back that we could attract very skilled folks and pair them with customers very efficiently and very quickly. It took a lot of work to actually develop the engines and all of the enterprise innovations. But those are the bookends. There was that insight in ’98 that no amount of technology could fix a broken talent model and a broken business model. Then there was that ‘aha’ in 2011 and 2012 that the on-demand piece would be a very elegant solution to this problem.

At that point in the interview, I started to sketch a 3D model in my journal, based on Antony’s use of the term ‘fractional availability’. In figure 1, the 2D model above, we saw the snare, the dead end that Antony described as an economic problem that technology couldn’t fix in 1998. But with the surge of interest in on-demand platforms — like Uber, Lyft, and Airbnb — a new alternative appears. An additional dimension appears, so that people with high skills can still be affordable, because they don’t have to be hired as full-time workers sitting in a call center.
They can be paid only for the time that they are handling customer support requests, and they have gained their expertise at no expense to the company using their services on a part-time, on-demand basis. They’ve gained that skill as a power user, on their own time, for their own reasons.

Figure 2 — Fractional Availability

Here we see the 3D reality: it is possible to pay low ‘wages’ (perhaps ‘costs’ would be better) for highly skilled customer support, because of the wormhole that is fractional availability. (I think of it as something like the Guild Navigators in the Dune series, who can ‘fold space’ and travel ‘without moving’ from one star system to another light years away.) So, if you take the 3D chart above and look at it end on, concealing the dimension of fractional availability, it would look like figure 3, below. Directly allows companies to shift the needle from high costs for high skills, by tapping into the game-changing economics of fractional availability.

Figure 3 — Back in Two Dimensions

Theories into Practice

In the next post in this series, I will dig into the specifics of how Directly has moved the needle at a specific customer’s call center operations: a case study based on the adoption of Directly at MobileIron. But behind that practical and tactical step-by-step adoption is the origin story of Antony’s frustration with call center economics in 1998, and the impact that fractional availability is having on the world. Directly’s success has come from harnessing that theory, and making it do the heavy lifting. Also, you might like to hear Antony and Jonathan Keane, Director of Customer Service at Republic Wireless, in a livecast on the topic How To Deliver 2.9 Minute Response Times During A New Product Launch, Thursday, February 25 @ 11:00am PST / 2:00pm EST. Directly has sponsored this post, but the opinions are my own and don’t necessarily represent Directly’s positions or strategies.
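An aside on the pooling economics Antony describes: the penalty for splitting one homogeneous agent pool into many skill-based pools is a standard queueing-theory result, which can be sketched with the Erlang C formula. The 10-agent, 8-Erlang numbers below are illustrative assumptions of mine, not figures from Directly or the interview:

```python
from math import factorial

def erlang_c(servers: int, offered_load: float) -> float:
    """Probability an arriving customer must wait in an M/M/c queue."""
    a, c = offered_load, servers
    assert a < c, "offered load must be below total capacity"
    waiting_term = (a ** c / factorial(c)) * (c / (c - a))
    normalizer = sum(a ** k / factorial(k) for k in range(c)) + waiting_term
    return waiting_term / normalizer

# One homogeneous pool: 10 agents sharing all traffic (8 Erlangs offered).
pooled = erlang_c(10, 8.0)

# Ten skill-based pools: each a single agent at 80% utilization.
# For an M/M/1 queue, the probability an arrival waits equals the utilization.
split = 0.8

print(f"P(wait) pooled: {pooled:.2f}, split into 10 pools: {split:.2f}")
```

With the same ten agents and the same total traffic, an arrival to the pooled group waits roughly half as often as an arrival to a dedicated single-agent pool, which is why skill-based splitting “fell through the floor” economically, and why drawing each expert for small slices of time from a much larger pool changes the equation.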

posted 2 months ago on gigaom
A new milestone in the maturation of the Internet of Things has been reached: two contending organizations — the Open Interconnect Consortium (backed by Intel and others) and the AllSeen Alliance (backed by Qualcomm and others) — are merging to form the Open Connectivity Foundation. This is a big step, and one that may help break the logjam in the market. After all, consumers are justifiably wary of making a bet in home automation, for example, if they are unsure about how various devices may or may not interoperate. Aaron Tilley points out that IoT has seemed to be, so far, all hat and no cattle: In some ways, the Internet of Things still feels like empty tech jargon. It’s hard to lump all these different, disparate things together and talk about them in a meaningful way. Maybe once all these things really begin talking to each other, the term will be more appropriate. But for now, there is still a mess in the number of standards out there in the Internet of Things. People have frequently compared it to the VHS-Betamax videotape format war of the 1980s. The VHS-Betamax format war was not solved by standardization, though; it was settled by the VHS vendors making the devil’s bargain with porn companies. The OCF may be more like the creation of the SQL standard, where a number of slightly different implementations of relational database technology decided to standardize on the intersection of the various products, and that led corporations to invest when before they had been stalling. The consortium includes — besides Intel and Qualcomm — ARRIS, CableLabs, Cisco, Electrolux, GE Digital, Samsung, and Microsoft. Terry Myerson, Executive Vice President, Windows and Devices Group at Microsoft, announced the company’s participation in the creation of the OCF, spelling out Microsoft’s plans: We have helped lead the formation of the OCF because we believe deeply in its vision and the potential an open standard can deliver.
Despite the opportunity and promise of IoT to connect devices in the home or in businesses, competition between various open standards and closed company protocols have slowed adoption and innovation. […] Windows 10 devices will natively interoperate with the new OCF standard, making it easy for Windows to discover, communicate, and orchestrate multiple IoT devices in the home, in business, and beyond. The OCF standards will also be fully compatible with the 200 million Windows 10 devices that are “designed for AllSeen” today. We are designing Windows 10 to be the ideal OS platform for Things, and the Azure IoT platform to be the best cloud companion for Things, and for both of them to interoperate with all Things. Microsoft was late to the party on mobile, but Nadella’s leadership seems to be all about getting in early on other emerging technologies, like IoT, machine learning, and modern productivity. Noticeably absent are the other Internet giants: Apple, Amazon, and Google. When will they get on board?

posted 3 months ago on gigaom
When people ask me what I am researching, I confess it can be a difficult question to answer. Sometimes I will highlight a topic I have written a report about, such as mobile payments or the Internet of Things; at other points, I’ll just pick something that is current but not too overblown. Right now, the latter could be machine learning and algorithms, for example, which have long roots but which are generating quite a lot of interest. The fact of the matter, however, is that my brain doesn’t work like that. Technology is having a profound impact on just about every aspect of what we do, how we work and, indeed, how long we live. While it would make sense to specialise, as indeed I have done at various points in my career, parts of my brain refuse to co-operate, insisting on following lines of thinking that I would rather they didn’t. It’s like pulling a thread on a jumper. Right now, for example, Virtual Reality is rocking my boat — but why? I can’t help asking the so-what questions: what does it do for us, where does it take us that we can’t otherwise go, and so on. Frequently the answer lies not in any one technology, but in the way they inter-operate: the Internet of Things, for instance, is in reality a vast, distributed data generator, very little of whose output we have yet learned to exploit. All of which means that, while the various cool (a.k.a. hyped) things going on in technology are interesting, my own view is that they will have more of an impact as they start converging. We could be… no, let me rephrase, we are just scratching the surface of what technology enables us to do. In the book I am drafting, to be called Smart Shift, I call this the Law of Diminishing Thresholds — as costs fall, so the solution space expands. Exponentially. I believe that the last few years, as interesting as they have been, will be seen as a pause in the overall cycle of innovation, as the tide withdraws only to come crashing back upon the shore.
Even the current darlings of the industry are reaping what has already been sown; but new seeds are already growing, and it is on the basis of these that the next wave of corporate and societal change will be felt. In the knowledge that we are on the brink of a breakthrough, I present a technological map of the future. I drew this largely to get it off my chest (and boy, is that a relief) but also to set the scene. Anybody focusing on one area or another is missing the emerging bigger picture, and as the future will be built upon this foundation, it is important to see as much of it as possible. At the moment I have more questions than answers — another confession. I wish it were different because, if I knew what the future held, I could start placing a few bets. Sure, robots; sure, semantics; sure, self-programming and orchestration, all of which will have a profound impact. I’ll be expanding on these topics here at GigaOm. As things stand, however, I will make only one hard prediction: hold on to your hats, it’s going to be quite a ride. P.S. If you have any questions or you want me to run through the technological map of the future, please do get in touch. Original version posted on LinkedIn on 16 February 2016.

posted 3 months ago on gigaom
Doctor. Lawyer. Architect. Professional baseball player. Dragon slayer! Of all the things kids hope to be when they grow up, “IT Guy” almost never tops the list. Yet here we are, many of us parlaying a love for computer games or coding into a 40-hour/week job where our passion for all things digital takes a sharp turn for the mundane. It’s a simple fact: there’s just too much minutiae to really enjoy being an IT professional most days. Our natural born problem-solving skills – talents, really – go underutilized thanks to clunky, antiquated laptops, copiers, and printers that demand way more time than they’re worth. Where’s the fun in that? The common thread among IT professionals is that we love technology. It’s in our titles, after all. Let’s pause for just a moment for full disclosure: HP sponsored this post, challenging us to #ReinventTheOffice here at Gigaom. And oh, do we love a challenge. But aside from a deep-seated appreciation for shiny new tech toys, how can office machines really improve corporate life? It’s in the details. Give your employees free coffee, welcome to the status quo. Give your employees a free-roaming barista with coffee, lattes, and macchiatos…welcome to utopia! Everyone appreciates better, especially when it makes life easier. It’s what sets a good printer apart from a great one, and HP is positive they’ve got the special sauce. Until A.I. officially takes over, business machines have one critical thing going for them: they’re free from the emotions, physical limitations, and time constraints the rest of us humans have to contend with every day. They can work harder, faster, longer, and better than us mere mortals at many tasks, upping the ante on productivity and efficiency and making us look really good at our jobs. Investing in industry-leading technology, therefore, makes perfect sense. 
We get away from the basic, day-to-day management of a multitude of office machines and instead focus on development that really moves the needle for business. We feel valued, accomplished, and part of the team. Finally, our futuristic thinking comes into play! Problems are solved! World peace is restored! Well. To put it simply, increasing a company’s efficiency and the quality of its work makes everyone happy. When we’re allowed to work smarter, not harder, we thrive. And if you’re the IT Guy, the addition of innovative technology is just icing on the cake. It’s not a new concept, but trusting office machines to tackle multiple tasks, and do them exceptionally well, is no easy feat. There’s not an IT manager in the world who isn’t on the hunt for the most streamlined, high-performing devices on the market that meet these requirements and then some. We want it all: speed, high output, low cost, low maintenance. And, to be honest, gadgets are cool. It’s even cooler when the latest, greatest devices really do improve our business. If wanting the biggest, baddest, and best for our team is wrong, we don’t want to be right. Let’s get back to that coffee idea. If we’re truly going to #ReinventTheOffice, we have to combine the finer details that set a business apart – expert barista! – with the technological innovations that drive it forward – streamlined machines! The result is kind of a big deal. HP promises to take the workhorses of the office, the printers we use every day, and make them faster, leaner, and smarter so we can focus on doing business and not on managing our equipment. Add the industry’s most advanced security and you’re practically in unicorn territory. It’s an aggressive agenda, but in the interest of future-thinking and world peace, we’re willing to try it out. IT professionals serve double duty as both internal support and tech influencers at Gigaom.
As a company, we bring the highest caliber news, editorial, and product information to our audience, and we implement our own best advice to keep our office running at its best. We’re taking a hard look at HP’s #ReinventTheOffice challenge, which aligns perfectly with our goal to stay on the leading edge of business technology. As usual, our readers will directly benefit from these efforts, but our IT managers will undoubtedly enjoy testing out that new, shiny toy. Stay tuned.

posted 3 months ago on gigaom
2015 was the year when an unprecedented number of users took action against the ads that slowed web pages and turned the online content experience into a frustrating game of close-that-ad. According to a PageFair and Adobe report, U.S. ad blocking grew 48% in the twelve months leading up to June 2015. That’s 45 million users—16% of the population—who just said no to digital and, in particular, mobile web advertising by downloading ad blocking applications. With eyeballs and revenue on the line, thought leaders debated whether the ad blocking trend would destroy or save advertising. The Association of National Advertisers (ANA) blamed the digital ecosystem. The Internet Advertising Bureau (IAB) blamed themselves for having “lost track of the user experience.” (They also notably took ad blockers to task for disingenuous practices, most specifically paid “whitelists” for publishers.) The cost of ad blocking is significant, with an estimated $781 million loss for the industry. But another resonating impact of the Great Ad Rebellion of 2015 will be found in its influence on marketing investments. What will marketers do differently to navigate the digital/mobile landscape in 2016?

Revisiting advertising. Lest there is any question, ad blocking will not prompt an all-out surrender by the ad ecosystem. Some publishers, like GQ, Forbes and, more recently, Wired, are fighting fire with fire by blocking users who run ad blockers. But the longer-term strategy is to address the issues with the ad experience. Some of this responsibility falls on publishers, who determine the degree of disruption that must be tolerated to access content, as well as on the ad tech landscape, where fierce competition can inspire extreme approaches to ad engagement. (To steer publishers and platforms to a more user-friendly approach, and as part of its mea culpa, the IAB introduced new guidelines that emphasize ‘light, encrypted, ad choice supported, non-invasive ads’.)
But no change can succeed unless marketers direct ad dollars to those that are innovating in favor of an improved experience. This isn’t a simple task, given that site-by-site scrutiny can work against the efficiency gains of programmatic buying, a practice that has itself been blamed for the surge in ad blocking. As such, there will also be other moves to optimize ad impact, including increased investment in emotionally aware ads, where data is used to extrapolate insights about a user’s psychological state in a given moment. Incorporating a measure of receptivity into ad delivery could prove to be the much-needed difference between engaging a consumer and ticking them off.

Thinking beyond advertising

Ongoing concerns about ad ROI will prompt more marketers to deepen investments in other approaches. Native advertising, the modern-day equivalent of the advertorial, offers a worthy complement to traditional ads. Content marketing and branded content will help brands meet the need to feed social channels. Influencer marketing will gain practitioners as marketers struggle to connect with elusive millennial audiences. We’ll also see more brands practicing corporate social responsibility and, of course, promoting those good deeds via social channels. Each of these tactics offers a subtler alternative to the traditional advertising message. And while this can be a strength in an oversaturated landscape, there is a fine line between subtle marketing and the calculated manipulation of audiences. The FTC has tuned into this, releasing guidelines to ensure consumers can distinguish native advertising from editorial content. But marketing’s most powerful critics are the consumers themselves, which leads to the next point…

Embracing feedback—in all forms

In a world of 24/7 marketing, brands are constantly challenged to creatively and authentically engage consumers in “conversation”. The always-on dialogue represents tremendous opportunity, but it doesn’t come without risk.
Today consumers are quick to call brands out when they’ve missed the mark, even when it’s as seemingly innocuous as Red Lobster’s slow response to a shout-out from Beyoncé. Success doesn’t grant immunity either, as evidenced by the less-than-warm welcome REI received on Reddit following its widely celebrated #optoutside campaign. This vulnerability could make one want to crawl back into the safe confines of traditional marketing, but of course that’s not an option. In 2016, more marketers will have strategies in place that allow them to creatively participate in the two-way dialogue while also managing the inherent risk. This means more than having an ear to the ground; brands need a plan that allows them to quickly gauge when and how—or if—it makes sense to engage or respond. (Arby’s farewell to its consistent critic Jon Stewart is a stellar example of a brand creatively and effectively steering into negative feedback.) It may be that consumer ad blocking is really only part of this feedback cycle—less a mass exodus from advertising than an aggressive critique of its current form. Either way, it is a milestone in the ongoing transition away from one-way marketing, perhaps one of the last nails in the coffin. Today, consumers have more than just a voice—they control the levers on which messages they receive and when. Marketers will need to keep this in mind throughout the execution of every strategy and tactic if they want an edge in 2016 and beyond.

posted 3 months ago on gigaom
The transition to cloud computing – which at its current snail’s pace doesn’t warrant ‘transformation’ rhetoric – may be getting a goose from tightening finances in the corporate world: a time when the purported risks of cloud computing are moderated by companies hungry for cost-cutting. And that may be part of the slow-down in tech spending right now. As Jay Greene writes in the WSJ,

Hesitance among chief information officers to commit to long-term hardware and software purchases may reflect the gradual shift from corporate data centers to so-called public cloud offerings from companies such as Amazon.com Inc. and Microsoft Corp., Deutsche Bank analyst Karl Keirstead wrote in a research report. “It is entirely plausible that this is having at least a marginal impact on the desire of large enterprises to sign material and multi-year commitments to on-premise technology suppliers,” Mr. Keirstead wrote.

Gartner research chief Peter Sondergaard made a related observation at the recent Wall Street Journal CIO Conference, noting that budget pressures are pushing corporate technology managers to take a close look at their options. “I think many [CIOs] have benefited from pressures in central IT budgets, in that it has created opportunity for looking at different alternatives,” Mr. Sondergaard said. Take Ted Ross, CIO of the city of Los Angeles. He needed to upgrade the technology that powers the city’s Business Assistance Virtual Network, the site where vendors bid for projects from various city agencies. Ross considered buying new blade servers to host the site. Instead, he decided to run the site on Microsoft’s Azure technology. He’ll halve his costs, and the migration should take four to six weeks, he said. “It really seems it’s more judicious to make the investment in the cloud,” Mr. Ross said. The winners in this foot race? Amazon AWS is the market monster, with Microsoft a strong #2 with Azure and the company’s productivity products.
Google is perceived as a trailing #3. But the larger market of SaaS players is also going to benefit from this windfall, and the more traditional enterprise hardware and software players – HP, SAP, and the like – will be facing increasingly strong downdrafts in this turbulent and accelerating market. Originally posted at stoweboyd.com on 17 February 2016.

posted 3 months ago on gigaom
Putting the burden of retraining in a digital world on the backs of the workers may be as ‘enlightened’ a policy as we’ll see in the postnormal economy. I’ve been reading a piece on AT&T’s CEO, Randall Stephenson, and his plans to retool the company for an accelerating and vastly different world, one in which his company will be competing with Google and Amazon, not just traditional phone companies. To get there, the company will have to retrain — or replace — many of its 280,000 workers. But today’s businesses are not going to take on the burdens of such a massive training effort: they will instead expect workers to dig their own hole and sharpen their own shovel, as I put it. Before anyone can get reengaged with their job, they have to get reengaged with their work, on a personal level. As I wrote,

This is where a truce has to be called and each individual commits to a personal program of engagement in what they consider their calling, which may only obliquely line up with the job that the company has that person doing. This involves reading, reflection, discussions with other like-minded people, and sharing and growing those thoughts in groups, offline and online.

My expression for this investment, where the individual reengages with their own work, in a sense independently of the company (or companies) they may be working for, is this: Dig your own hole, sharpen your own shovel. And this will involve time. Each person will have to carve out time for this engagement: it won’t just happen. AT&T seems to be making this corporate policy. Quentin Hardy, Gearing Up for the Cloud, AT&T Tells Its Workers: Adapt, or Else:

In an ambitious corporate education program that started about two years ago, he is offering to pay for classes (at least some of them) to help employees modernize their skills. But there’s a catch: They have to take these classes on their own time and sometimes pay for them with their own money. To Mr.
Stephenson, it should be an easy choice for most workers: Learn new skills or find your career choices are very limited. “There is a need to retool yourself, and you should not expect to stop,” he said in a recent interview at AT&T’s Dallas headquarters. People who do not spend five to 10 hours a week in online learning, he added, “will obsolete themselves with the technology.” […] Companies’ reinventing themselves to compete with more nimble competitors is hardly a new story. Many have tried, and a handful have even succeeded. Mr. Stephenson wants AT&T to be among those few. In the last three years, he has spent more than $20 billion annually, primarily on building the digital business. DirecTV was acquired in a $63 billion deal last year, and several billion more was spent to buy wireless businesses in Mexico and the United States. Even for a company with $147 billion in 2015 revenue and over $400 billion in assets built up over more than a century, it’s a lot. That can’t happen unless at least some of his work force is retrained to deal with the technology. It’s not a young group: The average tenure at AT&T is 12 years, or 22 years if you don’t count the people working in call centers. And many employees don’t have experience writing open-source software or casually analyzing terabytes of customer data. If you don’t develop the new skills, you won’t be fired — at least AT&T won’t say as much — but you won’t have much of a future. The company isn’t too worried about people leaving, since executives estimate that eventually AT&T could get by with one-third fewer workers. Mr. Stephenson declined to project how many workers he might have by 2020, when the cloud-based system is supposed to be fully in place. One thing about cutting people in an aging work force, he noted, is that “demography is on our side.” Other senior executives say shrinking the work force by 30 percent is not out of the question. 
AT&T’s Vision2020 program for employee education is based on workers giving up time on nights and weekends — uncompensated — with the company reimbursing the cost of courses up to $8,000 a year. My bet is that this is the new basis for strategic commitment to an educated workforce: the company will pay the out-of-pocket costs, but the worker still has to hold down a full-time (or more than full-time) job, and dedicate serious amounts of ‘leisure’ time to coursework, time that normally would be spent on outside interests, family, or moonlighting. Depending on your perspective, this looks like a fair deal, an additional encroachment of work into the personal time of workers, or just the way things are now. And this might be the best deal workers can get, in an economic climate of endemic recessionary philosophy mixed with the threat of becoming obsolete in a marketplace driven by high technology and hollowed out by automation, AI and algorithms, and the free-trade outsourcing of work abroad. Originally published on Medium on 13 February 2016.

posted 3 months ago on gigaom
The US National Highway Traffic Safety Administration may have adroitly resolved the notion of driver accountability for the coming smart car future. It may sound like a ‘through the looking glass’ paradox, but the US National Highway Traffic Safety Administration (NHTSA) has decided — in the face of relentless innovation in driverless vehicles — that cars can be their own drivers. This has enormous implications, and was motivated by the design issues of future AI-driven cars. Chris Urmson, the director of Google’s driverless car initiative, raised the issue with NHTSA, asking how the agency interprets the Federal Motor Vehicle Safety Standards (FMVSS) vis-à-vis smart cars. Wayne Cunningham, Feds declare that Google’s self-driving car is its own driver:

NHTSA posted a detailed response on its website. The response shows that Google was concerned how the FMVSS could be applied to a computer-controlled car lacking a steering wheel or any other traditional driver controls. Urmson suggested that NHTSA could interpret the FMVSS as not applying to Google’s cars at all, or apply a traditional interpretation, assuming a driver in the left front seat, or consider the system controlling the car to be the driver. In NHTSA’s letter, it chose the last of those options, determining that the self-driving system is the driver for purposes of the FMVSS.

So, in principle, this means that Google (and others) can design cars that have no requirement for human-oriented driver’s controls: steering wheels, accelerators, brakes, or rear-view mirrors, for example. But it might also open the door to something perhaps just as important: if an AI-driven car is its own driver and no person riding in the car is playing that role, then in the case of an accident there is no human responsible, since the car is the driver. The NHTSA may have adroitly resolved the notion of driver accountability for the coming smart car future. Originally published at stoweboyd.com and workfutures.io on 10 February 2016.

posted 3 months ago on gigaom
IBM sent some questions following the recent IBM Connect conference. They are based on some unwritten assumptions that I disagree with, which will become evident in my responses. Here are the questions:

What is your definition of a successful social enterprise?
Why do companies consider forming an enterprise-wide social network, and what are the biggest benefits?
How are enterprise social networks used to share knowledge and increase innovation?
What hurdles do organizations face when implementing an enterprise social network? How can you overcome these hurdles?
How do you see enterprise social networks evolving over the next 5 years?

Some answers:

Q1: What is your definition of a successful social enterprise?
A1. The idea of a ‘successful social enterprise’ is simple if you approach it superficially. In that case you simply define ‘success’ as some degree of adoption of social tools, and the harvesting of their purported benefits based on the network effects of social integration. A richer, more nuanced definition requires a deep dive into significant changes in people’s aspirations, corporate values, and the dispersal of tech platforms that underwrite new ways of work, not just new ways to communicate. (But this is not the place for that book to be written.)

Q2: Why do companies consider forming an enterprise-wide social network and what are the biggest benefits?
A2. There is actually a large-scale migration away from the now-mainstream model of ‘social business = a company using an enterprise social network as the platform for communication, collaboration, and coordination’.
A sense that the promise of social collaboration has failed is the backdrop for many companies and teams moving to try chat-based work solutions, and for the resurgence in the use of email, now somewhat socialized (as in IBM Verse and Microsoft Office 365).

Q3: How are enterprise social networks used to share knowledge and increase innovation?
A3. Information sharing (mistakenly called ‘knowledge sharing’) is one of the most direct benefits of social platforms, of whatever kind. They decrease the costs involved, and the social motifs — like following, @mentions, and topical activity streams — have revolutionized how we think about working together. I think increasing innovation is a separate, but immensely important, issue. Tools need to stay out of the way, dropping into the near background, so that innovation can happen: they don’t engender creativity, per se.

Q4: What hurdles do organizations face when implementing an enterprise social network? How can you overcome these hurdles?
A4. The hurdles of adopting any innovation — like a new communications or information platform for business — are consistently the same. First, people differ in the degree to which they are psychologically disposed toward adopting new technologies and techniques (and the values that come along with them). So-called innovators — Everett Rogers’ term — are quick to adopt, the laggards are most averse, and the rest of us are distributed in between in other groups: early adopters, early majority, and late majority. That’s the nature of people. Each group has its own set of concerns and considerations that slow adoption to a greater or lesser extent. This is independent of the specifics of any technology or the dynamics of any company, and it dominates the diffusion of innovations, which is why Everett Rogers gave his magisterial book that title.
In the case of ESNs, adoption has been problematic because the benefits are difficult to quantify, are slow to be realized (if at all), and the established alternatives (like email) are deeply embedded in business practices and processes. Adoption has been so slow that innovators and early adopters are jumping the curve and moving on to new approaches before the majority has adopted the old ones. So ESNs are already a lap behind in the communications platform foot race.

Q5: How do you see enterprise social networks evolving over the next 5 years?
A5. The continued acceleration toward mobile, wearables, and augmented and virtual reality (or surreality, as I call it) will mean even more of a migration away from desktop/laptop use and the decline of ‘desktop’ motifs. In a few years the inroads made by touch, voice, gesture, and surreality will have profound impacts on how people at work choose to communicate. Added to the rapid rise of AI assistants (or assistance, depending on your view), the premises of ‘working together’ will change as much as the Web has already changed them. So, while we will still be working in social networks in five years — we are human beings, after all — we are unlikely to be using platforms based on the design and organizing principles of what we call ESNs today. Cross-posted from Medium and stoweboyd.com on 8 February 2016.

posted 3 months ago on gigaom
On Thursday, LinkedIn posted some very disappointing numbers, and the result was a massive sell-off of the stock. The company’s reported losses and slowing growth erased nearly $11 billion of the professional networking site’s market value. Combined with lowered forecasts for the year, this translated into a roughly 40% drop in the company’s valuation. Another major collapse in confidence seems to be hitting Tableau, which dropped 45% in after-hours trading on Thursday after announcing higher-than-projected revenue and earnings per share, but a real slowdown in licensing revenue. Twitter continues to stumble, losing 5%, and Facebook likewise took a 5% drop. The tech selloff continued with Apple (down 2.67%) and Amazon (down 6.36%); Box fell 7.44%. The tech market appears to be getting whipsawed by the uncertainties in the world economy, with those showing the most significant drop-off in past and projected revenues getting hammered. But is there something larger at work? I read a great analysis by Jessica Lessin at The Information, suggesting that there may be. In The End of Tech Startups she writes,

[…] the period where tech startups can readily disrupt larger tech companies is ending for a simple reason: Today’s tech behemoths aren’t the lumbering giants of yesteryear. They are leaner and meaner and more competitive precisely because they have co-opted the same technologies startups used to attack them. Take cloud computing. Sure, AWS makes it dead simple for two developers in a garage to spin up a company. But Microsoft, Facebook and Google have massive cloud infrastructure advantages of their own. In fact, they’re the ones powering some of these startups. Anything startups have access to, big tech companies have access to in a much deeper way. So they can operate faster—and test faster. And because they can test faster, they can build faster. Then consider internal communications.
One of the biggest advantages any startup has is the ability to make decisions and communicate quickly without layers of bureaucracy. Often they do so by adopting the latest sort of collaboration method quickly. […] To all you aspiring tech entrepreneurs out there, it’s time to get creative if you want to take on a tech company. And if you don’t, there’s still plenty of opportunity going after non-tech incumbents in everything from media to education and health, which is probably why we’re seeing so many startups turn their attention outside of tech these days.

Lessin suggests we’re moving to an era where the Internet giants simply have too much juice for startups to prevail against them. I think this is borne out in many sectors, like the melting away of the valuations (and opportunities) of file sync-and-share companies, like Dropbox and Box, as the monsters move in and drop the price point to zero. So, as the bear market grinds on in the coming months, note the difference in the losses that the market will deal to larger and smaller players. The LinkedIns and Tableaus will lose much more than the giants, and the giants will continue to turn the screws, leveraging their positional, financial, and operational advantages. They will continue to win even as investors lose. And startups will face the worst conditions: less capital, worse valuations, and very strong entrenched Internet giants dominating all important markets.
