The idea that we’re shifting to the "next economy," to borrow the title of an O’Reilly Media conference I recently co-hosted with Tim O’Reilly in San Francisco, presupposes that our current one is ending.
And that of course can be an unsettling prospect. People are wondering how they’ll pay the bills. Where they’ll find purpose. What life will be like in a world where AI, augmented reality, and the Internet of Things proliferate so rapidly that even the most diehard technophiles begin to wonder how long they can keep up with the treadmill of progress.
But while fear and uncertainty are natural precursors to major change, the key takeaway at the Next:Economy conference this year was the sense of optimism that informs the cultural shift that is underway.
Entrepreneurs are well-acquainted with emotions like fear and uncertainty, because even in eras of stasis and conformity, they choose risk and the unknown over the tried and true. And so they recognize the potential of this current moment when so much is in play. Existing companies can use new technologies to create new competitive advantages. Startups can create new markets, introducing products that people don’t even realize they need – until they try them and realize they can’t live without them.
On the stage at Next:Economy, 19-year-old Stanford student Joshua Browder offered one example of this phenomenon. A little over a year ago, after he’d just obtained his driver’s license in England, his home country, he found himself getting more parking tickets than his parents were willing to foot the bill for.
So Browder started searching legal regulations for information he could use to appeal these tickets. The appeals he generated using his newfound knowledge were often successful, and as a result, he soon became the "local parking ticket guru" in his North London neighborhood, dispensing advice to friends and family.
To streamline this process, he created a website in August 2015 called DoNotPay.co.uk that helps people draft their own appeals via a chatbot that guides them through the process. At Next:Economy, he told the audience that the site’s users have successfully contested 180,000 tickets, saving approximately $5 million to date.
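The mechanics of such a guided-appeal service can be sketched as a simple decision tree: ask the user a fixed series of questions, map their answers to a recognized ground for appeal, and fill in a letter template. The questions, grounds, and templates below are hypothetical placeholders for illustration, not DoNotPay's actual implementation.

```python
# Minimal, hypothetical sketch of a guided-appeal chatbot:
# answers to scripted questions select a templated appeal letter.
APPEAL_TEMPLATES = {
    "signage": "I am appealing this ticket because the restriction was not clearly signposted.",
    "medical": "I am appealing this ticket because a medical emergency required me to stop.",
    "error": "I am appealing this ticket because the details on the ticket are incorrect.",
}

# Each entry: (question shown to the user, answer that triggers it, appeal ground)
QUESTIONS = [
    ("Was the parking restriction clearly signposted? (yes/no)", "no", "signage"),
    ("Did a medical emergency force you to stop? (yes/no)", "yes", "medical"),
    ("Are any details on the ticket incorrect? (yes/no)", "yes", "error"),
]

def choose_ground(answers):
    """Return the first appeal ground whose trigger answer matches."""
    for (question, trigger, ground), answer in zip(QUESTIONS, answers):
        if answer.strip().lower() == trigger:
            return ground
    return None  # no viable ground: better not to file a weak appeal

def draft_appeal(ticket_ref, answers):
    """Assemble an appeal letter from the user's answers, or None."""
    ground = choose_ground(answers)
    if ground is None:
        return None
    return f"Re: ticket {ticket_ref}. {APPEAL_TEMPLATES[ground]}"
```

A real system layers natural-language understanding and jurisdiction-specific rules on top, but the core value is the same: encoding expert knowledge once so it can be reused at zero marginal cost.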
But that’s just the start of Browder’s story. Once people saw how his site helped them with parking tickets, they started asking about other legal issues, such as seeking help with evictions or repossessions.
Quickly, Browder realized there were vast swathes of legal assistance that he could automate with his "robot lawyer" and provide to people for free.
As a result of this highly automated service, people who could never afford traditional legal assistance have a new resource at their disposal. And various kinds of legal actions that may not have made economic sense to pursue using human lawyers – such as challenging parking tickets or seeking compensation for delayed airline flights – suddenly become feasible in a world of robot lawyers.
Of course, in using AI technologies to create new services that make legal assistance more accessible to people who have often never used traditional legal services, Browder is setting the stage for further disruption. Many services that legal professionals have made a good living charging for can be automated too, and as Browder’s "robot lawyer" grows more sophisticated, it will increasingly compete with human lawyers.
Ultimately, Browder believes DoNotPay.co.uk will help streamline government services and deliver value to millions of people who need help navigating the different regulatory mazes that governments put into place. But its potential to displace some human lawyers touches on the primary theme of this year’s Next:Economy: Namely, how do we create a new, tech-enabled economy that keeps humans in the loop – and not just in a nominal or obligatory way, but rather, in a way that is both financially rewarding and personally fulfilling?
Or to put it another way: How can we use automation, AI, and other technologies to make human work better, instead of making it obsolete?
This isn’t an easy challenge. But many of the people who spoke at Next:Economy are already exploring how to do this. Paul English, founder of the travel booking startup Lola, explained how the app uses AI to amplify the efforts of live agents who interact with customers. In my conversation with IBM’s David Kenny, who oversees the company’s Watson initiative, Kenny noted how in medical contexts, Watson takes over the "grunt work," like radiology diagnoses, so human physicians can focus on solving the hard problems.
Historically, both markets and government policies have rewarded entrepreneurs, investors, and inventors for being more efficient with capital by reducing the costs of labor. And this approach has had huge social benefits. Increasingly efficient deployment of capital led to an abundance of goods and services that both increased human well-being and simultaneously created more opportunities for people to pursue a wider range of meaningful work.
But now that we’re reaching a point where it is technologically possible to remove greater numbers of human workers from the system, we should think about new ways to allocate rewards to capital that privilege the creation of meaningful human work. Moving forward, we must design incentives so that entrepreneurs, investors, and inventors are rewarded for building businesses that create good jobs.
For example, what if capital used to create companies that employ more than, say, 100 people at an average salary of $75,000 were taxed at a lower rate than capital invested in companies that do not meet these thresholds?
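As a toy illustration of how such a rule might operate, the logic reduces to a simple threshold check. The specific rates and thresholds here are hypothetical placeholders chosen for the example, not a policy recommendation.

```python
# Toy illustration of the incentive-tax thought experiment.
# All thresholds and rates are hypothetical placeholders.
MIN_EMPLOYEES = 100
MIN_AVG_SALARY = 75_000
PREFERRED_RATE = 0.15  # hypothetical reduced rate for qualifying companies
STANDARD_RATE = 0.25   # hypothetical standard rate

def capital_tax_rate(num_employees, total_payroll):
    """Pick a tax rate based on whether a company meets the job-creation thresholds."""
    avg_salary = total_payroll / num_employees if num_employees else 0
    qualifies = num_employees > MIN_EMPLOYEES and avg_salary >= MIN_AVG_SALARY
    return PREFERRED_RATE if qualifies else STANDARD_RATE
```

Even this trivial version exposes the design questions that follow: how the thresholds are measured, how companies might restructure to game them, and what happens at the boundary.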
Of course, before anyone turns a blue-sky idea like this into actual policy, a great deal of due diligence and iteration would have to take place. What are the optimal thresholds to use? How might the policy be gamed, and what unintended consequences might it produce?
I present it not as a fully baked policy proposal, but rather as a thought exercise that shows how economic ecosystems are never inevitable manifestations of "natural" laws or principles, but rather the product of incentives and regulations that privilege certain actions and values over others.
As we look to deploy AI and other technologies that have the potential to radically transform our economy, our workplaces, and even our sense of what it means to be human, we must remember this basic fact: The choices about the incentives we create and the values we favor are ours to make.
And if we thoroughly understand all the factors in play, and embrace the future with a thoughtful but fearless and adaptive mindset, it is well within our power to make the right ones.
This post was originally published here on October 20, 2016