Blog

For decades now climatologists have agreed that the first symptoms of global warming would be extreme weather events. (Here’s my 1989 Los Angeles Times article on the early researchers in the field.)

And I’ve argued that it will be extreme weather events that catalyze public opinion to demand further climate action from both governments and corporations.  As humans, we really can’t perceive “climate”--it’s just too long a time frame.  What we do understand is weather.

We’ve always named the traditional forms of extreme weather: cyclones and hurricanes. But in 2015 the UK also began naming severe storms (Angus, in 2016, disrupted transportation throughout the country). And severe storm names are growing more popular in the US as well (Jonas, the same year, set numerous East Coast records).

This summer, Europe is in the midst of a record-setting heatwave, and it’s been named as well: Lucifer. In the act of naming extreme weather events, we take them more seriously, and perhaps we will ultimately demand that our governments do the same.   

I worked last week with a major credit card company, and one topic was whether cash will disappear. Will there come a day when all transactions are electronic, perhaps using your smartphone--or even, say, just your fingerprint--and cash will be kept only in museums?

Some countries are almost there--in Sweden, for example, half the banks keep no cash on hand. Many restaurants and coffee houses no longer accept cash, and churches, flea markets and even panhandlers take mobile phone payments.

For merchants, going cashless lessens the threat of robbery and eliminates daily treks to the bank. This summer in the United States, Visa International announced it will give $10,000 grants to selected restaurant and food vendors who agree to stop accepting cash. (Merchants, of course, also pay a fee for every electronic transaction, significantly more in the US than in Europe.)  

So is this the end of cash?  As the saying goes, it’s complicated.

Electronic payments may be more convenient, but cash is still anonymous. As a result, cash fuels criminal ventures as well as tax evasion. Large bills are especially useful: $50,000 in $100s is a convenient stack only about 4 inches high. The EU will stop printing €500 notes, a criminal favorite, in 2018, and there are calls to eliminate the $100 bill in the US.

Governments, in short, might be just as happy to get rid of cash entirely.  But many law-abiding citizens consider the privacy of cash a valuable option--even though they may not actually take advantage of it very often.  It’s comforting to know that it’s there, and they’re likely to complain loudly if it’s threatened.  When India removed some large bills from circulation in late 2016, the result was a months-long national crisis that nearly brought down the government.

My guess is that governments won’t go to the trouble of eliminating cash. What they will do is make cash increasingly less attractive to use.  In southern Italy, where I spend part of the year, the “black” economy is huge--work is done off the books and paid for with cash.  But the Italian government has gradually made it harder to withdraw or deposit even moderate amounts of cash at the bank without paperwork and questions.  (The U.S. has similar bank regulations but for much larger amounts.  For now.)

On the other end of the scale, next year Italy will also stop minting 1 and 2 cent coins. Merchants will still be allowed to price merchandise at, say, €1.99--but you’ll only get that price if you pay electronically.   For cash, the price will be rounded up to €2.  The result is another subtle nudge toward cashless transactions.  
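
The arithmetic of that nudge is easy to sketch. Here is a minimal example in Python--the function name is mine, and I've used the round-up rule described above (a real statute might round to the nearest five cents instead):

```python
def cash_price(price_cents: int) -> int:
    """Round a posted price up to the next multiple of 5 cents for cash payment.

    Works in integer cents to avoid floating-point surprises.
    """
    return (price_cents + 4) // 5 * 5

print(cash_price(199))  # 200 -- a 1.99 EUR item costs 2.00 EUR in cash
```

Either way, the cash customer never pays less than the electronic price, which is exactly the point.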

Cash is likely to be with us much longer than many futurists predict.  The real question may be: who will bother to use it?


Sometimes I joke that we’ve been talking about the Millennial generation for so long, they got old.

Old enough, at least, to start families. And thus Millennials were a central focus at the Juvenile Products Manufacturers Association conference in California, where I spoke earlier this month.

The JPMA represents companies that serve the prenatal to toddler phase of parenting--car seats, strollers, feeding gear, furniture and, increasingly, baby monitors. And not just baby monitors, but really smart baby monitors.

The “Connected Nursery” was a big topic at the show. Sleep trackers like the Mimo, integrated into little kimonos or bodysuits, now connect wirelessly with, for example, your Nest thermostat, so if baby is too warm, the nursery heat turns down. Or your video monitor will notify you when baby starts to move. Smart scales track and record baby's weight precisely. And smart diaper clips will let you know when baby needs changing.

There is even a smart sound machine that detects when baby is stirring and will play soothing nature sounds or lullabies, or project animations on the ceiling...and if all else fails, puts Mom on the line for a two-way chat. All, of course, controlled by a smartphone app.

Clearly it’s early days for these devices, and physicians warn they’re no substitute for vigilant parents. There are even suggestions that some of these devices should be approved by the FDA. But as a trend, it’s inevitable.

And there’s more in store. Besides watching and enjoying baby, of course, the other new-parent activity is worrying and asking for advice. Already there are simple applications for Amazon’s voice-powered Alexa that will verbally answer a limited range of parenting questions.

It’s not hard to imagine a future in which an artificial intelligence--like IBM’s Watson or Google’s DeepMind--can be loaded with an encyclopedic body of baby and childcare information.  Mom or Dad will be able to ask aloud, any time, day or night, their crucial questions: “Is this rash normal?”  And get an immediate, authoritative answer.

Or, that intelligent baby advisor in the cloud could monitor the smart scales and other monitors in the “Connected Nursery” so it can answer questions like “Is my baby’s weight normal today?”

The only question it probably can’t answer is whether your baby is the cutest baby ever.  For that, you still need grandparents.  

As a science writer I always looked for stories about the future, so it’s no surprise that I started covering global warming and climate change during the late 1980s. Recently, going through my files, I ran across one of those stories, from 1989, that ran on the cover of The Los Angeles Times Sunday magazine.

What’s interesting now is how clear the science was, even back then--and how relatively uncontroversial the topic seemed. I wrote about the researchers at the Scripps Institution of Oceanography, then arguably the leaders in atmospheric research.

The Scripps researchers seemed confident that there was still time to slow or even stop the warming trend, as long as society acted relatively quickly. I think back then, the success of the global community in banning CFCs like Freon--to prevent atmospheric ozone destruction--was still a recent memory. Of course the world would rally to prevent an even bigger hazard.

And back then, there were no climate change skeptics for me to quote--something I would have done in my normal science-writing practice. Certainly there was no one from the fossil fuel industry to quote: back then, their own researchers were also concerned about global warming.

What strikes me now about the story is its calm innocence, given how politically charged and divisive the issue has become in the United States. Back then, I don’t think anyone on the science side suspected what kind of opposition waited ahead as the fossil fuel industry moved to protect its commercial interests.  

Ironically, they learned quickly. Within a couple of years, two of the Scripps researchers I profiled in the article were deep in the political thickets. Roger Revelle, sometimes called “the father of global warming,” was, under dubious circumstances, made co-author of an article that questioned the need for action on the issue. His young assistant, Justin Lancaster, publicly protested that his professor hadn’t been fully aware of the content of the article and that it didn’t reflect his views.

Very quickly, an early group of global warming deniers sued Lancaster. Facing a legal battle he couldn’t afford, the young researcher withdrew his statement--although years later, as Revelle’s apparent skepticism was repeatedly cited, he went back on the record.

But perhaps we should have suspected back then just how powerfully the fossil fuel industry would attack the science. It was, after all, much earlier in the century that another writer, Upton Sinclair, observed: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it."

For anyone interested in a bit of scientific nostalgia, a PDF of the article is here.


I recently helped with a research paper on the future of “algorithms”--a once-techy term that now generically refers to computer software that uses rules to make decisions. Software, for example, that takes complex data like your financial information and generates a credit score. Algorithms now do everything from pricing airline tickets in real time to improving long-term weather forecasts.

The basic question of the research was simple: is the increasing use of algorithms to manage society good or bad? As with almost any technology, the answer, of course, is both--but for the most part algorithms do their work invisibly.

There’s one technique, however, that I think will attract increasing attention: the combination of smart algorithms with computer vision.  

Video surveillance cameras in airport parking lots, for example, can now be connected to smart algorithms that analyze movement. They can tell the difference between a traveler returning to their car and someone who is casing vehicles for smash-and-grab thefts. In the latter case, the software alerts a human security guard, who comes out to check the situation. The system never dozes off or gets bored or distracted. It’s always watching.
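
In spirit, the decision layer sitting on top of such cameras can be as simple as a rule over the movement data. Here is a toy sketch in Python--the field names and thresholds are invented for illustration; a real system would use models trained on actual video features:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """Hypothetical summary of one person's movement in the parking lot."""
    seconds_in_lot: float
    cars_approached: int

def needs_human_check(track: Track) -> bool:
    # A returning traveler walks to a single car and leaves quickly;
    # lingering near several vehicles is what summons the guard.
    return track.seconds_in_lot > 600 or track.cars_approached >= 3

print(needs_human_check(Track(seconds_in_lot=900, cars_approached=5)))  # True
print(needs_human_check(Track(seconds_in_lot=120, cars_approached=1)))  # False
```

The point is less the particular thresholds than the division of labor: the software watches everything and escalates only the suspicious cases to a human.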

So too are the cameras in a recent retail store application, where the security cameras observe not only the customers, but the employees as well.  Using facial recognition, the cameras track every interaction each sales clerk has with the public.  

At the end of the day, algorithms produce a report on just how the clerk performed: Did they ignore customers? Were they shy about approaching people? How did their behavior relate to the sales they rang up on the register?

As the creators of the system put it, the report gives managers the opportunity for a “teachable moment” with each employee at the end of the day.

Or maybe it’s more of a threatening moment. 

A few weeks ago in New York I was interviewed for a History Channel documentary called This is History: 2016.  The show is based on a new Pew Research Center poll that asked four different American generations--from Millennials through the Greatest Generation--to name the 10 most significant historical events of their lifetimes.  

For starters, the differences between generations clearly reflected their ages: for Boomers and the Greatest Generation, civil rights, the JFK assassination and Vietnam loomed large. Gen X and the Millennials, on the other hand, named school shootings like Columbine and Sandy Hook, or terrorist attacks such as Orlando and the Boston Marathon.

Interestingly, each generation also had its own unique historical recollection not shared with any other generation: the Korean War for the Greatest Generation, Martin Luther King’s assassination for Boomers, the Challenger disaster for Gen X, and the Great Recession for Millennials.

In the end, however, all generations agreed on five key events: JFK’s assassination, September 11, Obama’s election, the Iraq/Afghanistan wars--and the tech revolution.

That last choice is noteworthy, considering that “the tech revolution” is neither a heart-stopping historical moment nor an intensely emotional national experience. No one ever asks “Where were you when you heard about the tech revolution?” Yet technological change has a place in the national memory that ranks with major social movements and tragic assassinations.

That’s what I talked about in the History Channel interview. It’s obvious that 2016 will be remembered historically for the rise of nationalistic populism, as evidenced by the Brexit referendum and Donald Trump’s victory. And there are, of course, lots of explanations for surging populism, ranging from anxiety over immigration to globalization and middle-class malaise.

Yet I think that underlying this political upheaval is another chapter in the tech revolution: the oncoming ability of machines and software to replace human labor--from fast food workers and truck drivers to young lawyers and accountants.  This chapter is still on its first pages, but already a broad segment of the population feels the economic earth moving beneath their feet: a vague sense that the fundamentals of labor and employment are changing, and not in a good way for human beings. 

Of course that’s far too nebulous a threat to win a political campaign, so politicians turn to easier targets like illegal immigrants and overseas factories.   But I think history will ultimately remember the populist uprising of 2016 as a distant early warning signal of a much larger economic crisis to come.  And that crisis will, for yet another generation, be among their ten most significant events.     

Someone asked me recently what the biggest challenges ahead in the HR world were. I spend a lot of time talking with those key figures in business--the people who manage the people--and at the moment I'd say there are four broad areas:

--The soft-skills gap in some younger workers. Broadly, these are skills involving communications, collaboration, unstructured problem-solving, etc. I worked with one Fortune 500 firm last year that’s planning a “remedial social skills” course for certain new hires.

This isn’t the same tired old knock on the Millennials--it’s rather a recognition that technology inadvertently impacts the development of soft skills for at least some adolescents and emerging adults. It’s a fixable issue that needs to be addressed in K-12 and college, but until that happens, the remediation will fall to employers.

--The challenge of virtual workplaces.  Whether managers like it or not, we are moving to a much more dispersed, partially virtual workforce.  The drivers include the cost of real estate and energy, the burden of commuting (including traffic congestion), and sometimes the preferences of the talented young workers we want.  

I worked recently with a white-shoe law firm in Manhattan that basically promised a partnership to a young woman who had graduated from Harvard Law with a stellar record. She turned them down--she’d interned for them one summer and disliked the lifestyle--and said she would take a job, but she wanted to live in Colorado. The older partners were stunned, but finally gave in. She works in Colorado, where she skis and hikes, and commutes into Manhattan once a month.

Of course the physical office is not going away—but it will be more of a place for collaboration than solitary work.  And it will be festooned with telepresence video screens that connect separate offices via always-on “windows”.  Among the challenges for HR: how do you create corporate culture and evaluate employees in a mixed real-virtual workplace?  What are the metrics to determine whether a job or business trip is better handled in the real world or virtually?

--The looming issue of white collar automation. Cognitive computing--the newest evolution of artificial intelligence--is performing many low-level white collar and even professional tasks more cheaply, and often better, than humans. We already see the impact in services like accounting, law and advertising, where the entry-level jobs, the traditional stepping stones to full professional responsibility and client contact, are being automated. What do you do with new workers while they are learning the practicalities of the job?

But white collar automation will also ultimately strike more broadly, resulting in repeated downsizing and restructuring in many sectors. A key response for HR is to encourage employees toward skills that can’t or won’t be taken over by computers--and to teach them how to work with cognitive computers in collaborative ways.

--The new jobs marketplace. The aging-out workforce, the shortfalls of our educational system, and the move toward highly specialized job functions mean that by the next decade employers may be chasing a smaller and smaller pool of qualified candidates. And those job candidates may not fully believe in the ability of any corporation to offer them long-term, secure careers.

Taken to its extreme, one could imagine young workers with highly valued of-the-moment skills marketing themselves in an online marketplace in which employers compete and bid up salaries, a bit like professional athletes.  These in-demand employees want to maximize their current payout, knowing that as they grow older they may need to take time off to retrain and re-enter the workforce. 

Last week Farhad Manjoo, the technology columnist for The New York Times, had a thoughtful piece on the death of the early futurist Alvin Toffler, most famous for his book Future Shock.  

Toffler’s thesis back in 1970 was simple: “Change is avalanching upon our heads,” he wrote, “and most people are grotesquely unprepared to cope with it.”

Forty-six years later, says Manjoo, “...it seems clear that his diagnosis has largely panned out, with local and global crises arising daily from our collective inability to deal with ever-faster change.” Yet at the same time fewer and fewer institutions are even thinking about the future in substantive ways.

It wasn’t always thus: in the Seventies, various organizations, such as RAND and SRI, worked for the government projecting the future of global politics and nuclear weapons. The Office of Technology Assessment was established by Congress in 1972 to look at the future impact of impending legislation.

But by the mid-90’s, when the OTA was shut down, the idea of futurism was distinctly tarnished.  Says Manjoo:  “Futurism’s reputation for hucksterism became self-fulfilling as people who called themselves futurists made and sold predictions about products, and went on the conference circuit to push them.” 

Alas, too true. When I began speaking about the future I was most reluctant to use the word “futurist.” Having spent twenty years in hands-on work inventing new media, I didn’t take futurists seriously: they often lacked technical understanding, or real business experience (or both). Too often their predictions veered off into either science fiction or simply what they’d like to see happen. Futurists became famous for their perennial predictions of flying cars. (The one below was supposed to arrive in 1967.)

Thus, when The New York Times asked me to be Futurist-in-Residence, I tried to talk them out of that title. I’d been around journalists for a long time and I feared that no one in the NYT newsroom was likely to take someone called a “futurist” very seriously.  But the newspaper insisted, and in the end I decided that when The New York Times wants to call you something, you might as well go with it.   

As it turned out, the title worked. When there is a “futurist” in the room, it gives everyone permission to untether, at least briefly, from quarterly reports and annual budgets. The time spent thinking out five to eight years is then very helpful when discussion returns to the here-and-now. A number of the organizations I’ve worked with in the past few years have initiated real changes in direction and strategy after a few hours of contemplating the world of the early 2020s.

Futurism is not dead; rather, as foresight has left the political process, it has become more local. And it's still a very good discipline for organizations and corporations to pursue.

As another early futurist, Kenneth Boulding, once said: “The future will always surprise us, but we must not let it dumbfound us.”  That's the very least we should ask from our futurists.

Here's a recent interview I did with Speaking.com, one of the leading speaking agency Websites:


And another interesting project with The Atlantic:


One morning a few weeks ago, three New York City policemen came to my door. Not ordinary officers, but members of the Counter Terrorism Task Force, working with the FBI. They wanted me to know that my name and address had just appeared on an ISIS hit list of 3,600 New Yorkers, released on a messaging app under the tag We Want Them #Dead.

Great way to start the day. However, the officers were quick to say that the FBI didn’t think this was a serious threat--there wasn’t a clear pattern to the names on the list, and some of the information was quite out-of-date. “Of course,” one said, handing me his card, “if you see anything unusual, give us a call.” But it appeared to be almost random New York names and addresses picked up from somewhere on the Internet.

Random? I asked to see some pages from the list. By far the largest number of names came from my borough, Brooklyn. Then I recognized a few neighbors and immediately suspected what had happened.

Brooklyn may be the world center of worthy causes.  Universal pre-K, ban plastic bags, widen the bicycle lanes--you name it, and we have a group for it.  I’m partial to a worthy cause once in a while, and so are some of my more activist neighbors.  We sign petitions, donate, end up on mailing lists....and in databases.

Many of the worthy causes sooner or later win (or lose) their battles, run out of money, or just fade away.  But sometimes their Internet databases live on, perhaps tended by a volunteer with limited time, perhaps not tended at all.

Aging database software is easy prey for even low-skilled hackers.  I suspect that somewhere among the defunct worthy causes is where ISIS collected their list.  Why did they even bother?  As a kind of psychological warfare, perhaps, as well as a way to get publicity and waste some U.S. law enforcement time. 

But there’s a larger issue here.  For my audiences, Internet security is at the top of everyone’s mind.  Many fear, from the stories they’ve read, that real online security is impossible.  I remind them that most of the big, notorious computer hacks we read about actually used very simple techniques--more often than not, exploiting human fallibility rather than esoteric technology.  Those human foibles range from clicking on links in unknown emails to, well, leaving a database abandoned online. 

The solution is broader than just trying to educate employees; by then it's probably already too late. We need education that starts in elementary school.  We teach kids how to cross the street safely, and that if they leave their bike far from home, sooner or later it’s going to disappear.  It becomes what we call "common sense."  Online security awareness should also be taught from an early age--so that leaving a database of names and addresses untended on the Internet is as unthinkable as leaving for vacation with your front door open.
