Welcome back to Mind Over Money. I'm Kevin Cook, your field guide and storyteller for the fascinating arena of behavioral economics.
Yesterday morning, July 25, I made a video titled The Cult of FANG: Investment Fads or Innovation Franchises? For listeners who may not be familiar with this particular stock market “cult,” FANG stands for Facebook, Amazon, Netflix, and Google GOOGL, all stocks that have done extremely well the past few years due to several factors involving technology-driven sales growth, investment popularity, and wide consumer and business user bases.
But my goal was to point out that the most important reason to buy and hold these stocks is that they are technological powerhouses that will continue to dominate their core businesses, as well as several new ones they are always itching to launch from their secretive R&D labs, inspired by Google's resurrection of the old Lockheed Martin-style "skunk works" teams for rapid innovation.
I think this "growth reset" for Facebook is all good. The company has been engaged in a massive campaign to redesign its business model, priorities and computer algorithms that control content and advertising. They are willing to limit advertising growth, and thus revenues, while they increase spending on privacy and security for users. These are all good growing pains for a platform that still has over 2 billion monthly active users.
And I immediately advised investors on Wednesday night to buy the stock under $170 in after-hours trading because I thought the 20% correction had already priced in most of the bad news.
But on the podcast today, we're going to talk about the deeper and larger issues underneath this failure. Because it highlights two of my favorite topics at the forefront of the collisions between culture and technology: good ole human behavior and the coming explosion of AI.
To me, Facebook's story is the first big train wreck with AI, with probably a few more to come in the next 25 years. Don't get me wrong. It was a great and necessary experiment that we just need to learn from, as quickly as possible.
Facebook was instrumental in bringing together my two favorite topics in a powerful and destructive way. Indeed, the combination of good ole human nature and advanced human manipulation algorithms (i.e., AI) was nearly lethal for democracy.
As Old as Religion, War, and Advertising
It wasn't that some AI-driven computer somewhere created reams of nefarious "fake news" and manipulative political division. It was more that advanced computing algorithms could be used as tools by humans who knew how to use ads, posts, groups, and fake accounts to spread their disinformation or their propaganda and, in essence, wage psychological warfare.
Recall what we talked about in my series of episodes on dopamine: human attention, emotion, and motivation are activated unconsciously not only by beliefs and biases but by our need to fit in to some group, somewhere.
How would small groups of humans know how to manipulate or persuade large groups of other humans?
Well that story is as old as religion, war, and advertising.
And the advanced tools that a huge, sophisticated platform like Facebook offered allowed savvy influencers to target multiple audiences very precisely with their persuasion, or their choreographed confusion.
So we have big challenges ahead of us with advanced technologies that are loosely grouped under the AI umbrella. Even our ability and willingness to understand and embrace new science and technology is limited by our natural cognitive biases, perceptual and linguistic blind spots, and by our neurological and emotional habits of thought and behavior.
This is the picture I've been building in recent podcasts, like my June episode titled Knowledge, Certainty, Destiny: How to Keep Up with Science & Technology. My goal is to help us see that we can't even talk about new science if we aren't vigilant about seeing through our own biases and habits.
Embracing the Next Brave New World
Few thinkers and scientists understand the challenge with AI better than MIT physicist Max Tegmark, author of the 2017 book Life 3.0: Being Human in the Age of Artificial Intelligence.
I introduced the book last year in What to Do Before the Machines Take Over as another voice on the "advance guard" of technology along with Elon Musk of Tesla TSLA and Yuval Noah Harari, author of Homo Deus.
Tegmark comes across as more optimistic than Harari, perhaps because as a scientist who is deeply involved in and understands the AI challenge, he wants to make sure we get it right. I think he's also friends with Elon Musk, who sometimes sounds like the most pessimistic of the 3 in his warnings about AI.
Musk is also on the scientific advisory board of Tegmark's foundation, the Future of Life Institute, a volunteer-run research and outreach organization in the Boston area that works to mitigate existential risks facing humanity, particularly risks from advanced artificial intelligence, though its researchers also address biotechnology, nuclear weapons, and climate science.
Life 3.0 begins with a science fiction story that doesn't seem that unrealistic. A group of computer scientists called the Omega team decides to build a super AI machine that can learn everything humans do.
In the podcast, I share a few paragraphs that describe Prometheus and its first goal of using Amazon's AMZN Mechanical Turk, or MTurk, to make money performing thousands of Human Intelligence Tasks (HITs) every day. Within weeks, the computer was earning over $1 million per day.
Then, they went on to teach Prometheus to make TV shows, movies, commercials, news, and eventually, politicians.
Prometheus became a global cultural and political machine that eventually achieved the goal of every AI that breaks out: world domination.
Tegmark tells this tale to make the point that AI's power is both unprecedented and unpredictable. And it's up to us to understand and embrace its potential so that we might have some control over its direction, velocity, and desired goals.
Tegmark wants more of us to be ambitious about what kind of world we could create with AI. Because the alternative -- thinking it is inevitable and out of our control -- leads to a complacency that won't be ready for the future it creates.
A great introduction to his ideas can be found in his first TED Talk for the book, released on YouTube on July 5...
How to get empowered, not overpowered, by AI
Cooker's 5 Proofs That We Are Naturally Irrational
In Dopamine and the Weather, Part 1, I talked about 4 things in life we consistently make go away from us. They were money, success, people and ideas. And in the case of ideas, it’s more that we are skilled at “keeping away” new ideas.
In this episode, I reviewed these concepts and added a fifth thing we repel that should have always been in my “top 4 list” of things we make go away: good health. And I call them "proofs" because the empirical evidence that each is true is overwhelming.
The first 4 "proofs of irrationality" -- how we repel money, success, people, and health -- are mostly about short-term decisions vs long-term planning, instant vs delayed gratification.
My fifth element of non-wisdom is our natural resistance to new ideas and scientific research. This failing seems to be much more complex and multi-layered, tied into beliefs, culture, relationships and groups we identify with.
Recognizing that you "keep away" new ideas, knowledge, and perspectives is even harder to see because beliefs, dogmas, and biases are so personal and not shared explicitly. They are mostly unconscious and so we are not even aware how our own perceptual and attention filters are working to help us ignore ideas that threaten our sense of who we are.
We Do Dumb Stuff, Over and Again
I could argue that distracted driving (i.e., texting or web surfing) with smartphones is evidence of all 5 of my proofs. It's a dumb act that instantly jeopardizes our wealth, health, future success, relationships, and our commitment to learning new things.
We keep new ideas and knowledge away for at least two reasons that I am beginning to theorize about. First, it’s much easier to have a tidy, compact and simple understanding of life and reality that fits well among our identified peer group.
This has been proven by behavioral science about decision making and heuristics. On a certain level, you could say that…
We don't really think for ourselves so much as with the group we want to belong to.
My second reason we keep our distance from new knowledge is that we are not actively engaged in the turbulent, exhilarating, exhausting and sometimes stressful pursuit of truth and wisdom. It’s not only a time commitment like learning golf or jiu-jitsu, it’s also more challenging, confusing and potentially overwhelming.
And it could upset our belief systems, values and existing relationships to start questioning everything again like when we were 5 years old, or 15. In short, it threatens our identity, safety, and happiness.
But continuous, never-ending learning is the best way to live. Science is the grandest human project. New ideas and research feed our progress and happiness, as a civilization and as individuals.
Scientific knowledge -- the truth about all reality from religion and neuroscience to genetics and economics -- has grown exponentially in the past 30 years. Have you? It's hard without lots of teachers, books, TED Talks and the time to explore them all.
Flight training was my intro into behavioral science because everything was focused on overcoming our natural human tendencies toward distraction, laziness, and fear. Planning, preparation and contingency-thinking were their replacements.
And flight training helped prepare my brain to be trained as a short-term trader, one of the hardest jobs on the planet to make a living at. I share that as a sample of some of the personal evidence I provide in my “5 Proofs.”
An Elephant in Your Brain
As wonderful serendipity (and lots of Google searches and back-links) would have it, this morning I discovered an associate of Max Tegmark's who co-authored a book that dovetails beautifully with my current topics of human irrationality as fodder for "fake news" and advanced AI.
Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He has a PhD in social science from Caltech and master's degrees in physics and philosophy from the University of Chicago.
Hanson worked for nine years in artificial intelligence as a research programmer at Lockheed Martin and NASA. He helped pioneer the field of prediction markets, and recently published The Age of Em: Work, Love and Life when Robots Rule the Earth.
That 2016 book is probably why he and Tegmark know each other. A revised paperback was published in June 2018 with 4 new sections, 18% more text, and 42% more citations. I'm looking forward to picking it up after Life 3.0.
But it's Hanson's most recent book that struck a chord this morning with everything I'm writing about lately. The Elephant in the Brain: Hidden Motives in Everyday Life was co-authored with Kevin Simler, a writer and software engineer who resides in San Francisco.
Here's how they intro the book on its website...
Human beings are primates, and primates are political animals. Our brains are therefore designed not just to hunt and gather, but also to get ahead socially, often by devious means.
But while we may be self-interested schemers, we benefit by pretending otherwise. The less we know about our own ugly motives, the better. And thus we don't like to talk — or even think — about the extent of our selfishness. This is "the elephant in the brain," an introspective blind spot that makes it hard to think clearly about ourselves and the explanations for our behavior.
The aim of this book is to confront our hidden motives directly — to track down the darker, unexamined corners of our psyches and blast them with floodlights. Then, once our minds are more clearly visible, we can work to better understand human nature.
You can read more from Robin Hanson on his aptly-named blog OvercomingBias.com.
Be sure to catch my entire podcast for Cooker's 5 Proofs as we prepare to tackle that most controversial of “new knowledge” topics, climate science, in coming episodes.
Disclosure: I own shares of NVDA and FB for the Zacks TAZR Trader portfolio.
Kevin Cook is a Senior Stock Strategist for Zacks Investment Research, where he runs the TAZR Trader and Healthcare Innovators services.