Artificial Simulations and Writing Your Own Eulogy (With Whiskey)
Host: Matt Hanham
Drink: Monkey Shoulder Blended Scotch Whisky
Welcome back, Simpletons, for another installment of the Simple Minds Podcast!
On host duty this week is Magic Matt Hanham. With him is a bottle of whiskey he received as a gift from a friend, and, like the good friend he is, he has decided to regift it to the podcast in a thinly veiled attempt to avoid paying for new alcohol.
“So as if things couldn’t get any worse, we’re going to talk about philosophy.” - Matt
Today’s topics have been brought in by Michael and Conrad, who try to discuss the subjects of artificial simulations and intelligence as well as the motivational tool that is writing your own eulogy! (This is going to be a fun episode, we promise. Don’t leave. Please.)
“Simulation theory and whiskey, just need a spliff and we’ll have the Joe Rogan Podcast!” - Michael
Things get pretty intense in this session, so without further ado, let’s get into it.
This free round is on Matt Hanham with the support of Michael Duncan, Justin Bourn, Jacob Moffit and Conrad “The Dark Knight” Francis. The Simple Minds Podcast unravels topics such as personal development, philosophy, life and business - one simulated drink at a time.
Listen on: Apple Podcasts | Spotify | YouTube | SoundCloud
Artificial Intelligence and Living Inside of a Simulation (4:25)
Michael starts things off with a quote by French philosopher and all-round cool guy - René Descartes:
“I suppose therefore that all things I see are illusions; I believe that nothing has ever existed of everything my lying memory tells me. I think I have no senses. I believe that body, shape, extension, motion, location are fictions. What is there then that can be taken as true? Perhaps only this one thing, that nothing at all is certain.” - René Descartes
Michael asks the guys their thoughts on the idea that we, like the characters of the Wachowskis’ epic cyberpunk adventure The Matrix (1999), are living inside of a simulation.
Justin jumps in first, having some experience on the matter thanks to his background as a specialist in Virtual Reality (VR) systems for visualising architecture. Justin argues that while the hardware for this sort of scenario could feasibly be built by humans at some point in the future, the sheer complexity of the software involved would require an artificial “Super Intelligence” to design it, as it would go far beyond the capability of the human mind to fully comprehend.
This leads Michael to reference an essay by Nick Bostrom, who argues that this sort of scenario is entirely possible and could come about within the next thousand years.
This idea of artificial intelligence overtaking human intelligence is often described as the “Technological Singularity”: the estimated point in time at which artificial intelligence will equal human intelligence in complexity and capacity, triggering an exponential rise in technological advancement as AI continuously improves upon and replicates itself. This line of thinking usually ends with the conclusion that such a singularity would ultimately result in the extinction of the human race, which would be unable to compete for survival against an almost infinitely superior intelligence.
Just so you know, futurist Ray Kurzweil puts the date for that “Technological Singularity/End of All Humankind” at 2045, with desktop computers estimated to equal human intelligence in raw processing power by 2029.
Yeah.
Anyway, the guys start discussing what humanity, in whatever form it might take by then, would be able to achieve if such a technological revolution were not to take place. How would humanity thrive in those circumstances?
For nerds like us, it’s worth briefly mentioning the Kardashev Scale. So hold onto your butts, because things are about to get trippy.
Developed by Soviet astronomer Nikolai Kardashev, the Kardashev Scale details the levels at which a civilisation could develop on a supermassive scale, measured by the energy it can harness. There are three levels:
Type 1 - Planetary: A civilisation utilises all the resources available to it on its home planet. (Spoiler: we’re not quite on this level yet - see the maths below the list.)
Type 2 - Stellar: A civilisation starts utilising the energy of its local star through massive superstructures, as well as the energy of the surrounding cluster of stars. (Google “Dyson Spheres”.)
Type 3 - Galactic: A civilisation begins utilising energy on a galactic scale, using black holes and uber-stars as resources.
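For fellow nerds who want numbers: Carl Sagan later proposed a continuous version of the scale, so a civilisation’s rating K can be worked out from its total power consumption P in watts:

K = (log₁₀ P − 6) / 10

Plug in humanity’s current consumption of roughly 2 × 10¹³ watts and you get about 0.73 - which is why, strictly speaking, we haven’t even cracked Type 1 yet.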
“We don’t do drugs on this podcast” - Conrad assuaging your obvious concerns
This brings us to an even weirder question - and prepare yourself - what if we really are living in a simulation right now? And what would we do with that kind of information?
This, unsurprisingly, leads to a clash of ideologies between Michael and Conrad. For Michael, merely thinking about the possibility is worth his time, whereas Conrad feels that this sort of knowledge, like all knowledge, should be a source of action; otherwise it risks becoming pointless.
The guys then get into the weeds on whether feelings and emotions are or are not, in fact, thoughts.
“Conrad doesn’t feel pain.” - Michael
Jacob starts talking (Yay! He needs the encouragement.) and mentions that Leonardo da Vinci explored this very idea, which is admittedly pretty fucking impressive considering computers weren’t even a thing until several hundred years after da Vinci died.
This leads to a different thought altogether (things are moving quickly in this episode, try to keep up): is there any validity to the idea that we could one day upload our consciousness to a computer? Would you do it if you were given the option? Among the guys, the answer sounds almost unanimous - a resounding yes, we’d all totally do it. Just imagine: an eternity of digital, self-replicating and hyper-intelligent Jacobs, or Matts, or Justins. (Conrads too, but we try not to think about that terrifying reality.)
“Consciousness is the software that runs on the hardware that is our bodies.” - Jacob
“I am but a mere Michael” - Michael
“I’m Batman” - Conrad
In conclusion: even if we’re living in a simulation designed by our Artificial Overlords of Doom, serving as living biological batteries to power their nefarious needs/desires, that doesn’t change the principle that we should use our time wisely, take as much action as possible within the simulation, and continue to strive towards self-improvement.
Writing Eulogies and Taking Action (45:49)
This leads to our second, lighter subject of the evening - writing our own eulogies. (Yay? Yay!)
Conrad introduces the guys to the (totally not morbid at all) idea of writing our own eulogies and then using them as tools to motivate ourselves to put our thoughts and ambitions into action. This is framed as a question: would you be happy with what a eulogy of yourself said about you and your achievements?
“Fucking oath I would.” - Conrad
To elaborate:
- Imagine yourself dead (You can choose how you die, have fun, be creative.)
- Write a eulogy of yourself, detailing your life.
- Think about who you’d like to read it. (Pro tip: always pick Morgan Freeman.)
Conrad elaborates, arguing that the process is a useful tool for contextualising your own achievements and working out how to move towards the goals you haven’t yet realised. Conrad also tells the guys that he’s done this practice several times, believing he’s improved as a person each time (as well as getting pretty fucking good at writing eulogies).
Justin chimes in on the subject as well, drawing on his recent experience of writing a eulogy for his aunt, who passed away earlier in the year. Through that experience, Justin found that a well-written eulogy can powerfully express who a person was, and he believes these pieces are useful tools for celebrating a person’s life in a more personal and emotional context.
Justin also mentions Gary Vaynerchuk, whose attitude towards self-improvement stems from a similar place: the belief that his actions should always contribute to his legacy, or “how many people will attend his funeral”.
This gets further connected to the philosophy of famous businessman/inventor/industrialist and real-life Iron Man Elon Musk, whose plans for his businesses extend far beyond what he expects to be his own lifetime - which brings us to the question of whether we should be doing something similar ourselves. Interestingly, this idea isn’t too far removed from what Sam Cawthorn said in last week’s episode, where he emphasised that businesses should be planned the way we plan for our children’s futures, with a clear idea of how we hope things will go over the next ten to twenty years. Both Musk and Cawthorn argue that this long-term philosophy is key to giving our actions agency and our ambitions a greater chance of becoming reality.
This leads to the guys finally asking the question we’ve all been dying to ask: what did Conrad write for his own eulogy?
Conrad won’t say much, but he did mention that he wrote he would die in his sleep from a heart attack. Beyond that, he was unusually coy. Thankfully, Matt manages to give us an idea.
“Conrad, loving father, friend to many, man of action. Batman.” - Matt
The guys start getting off topic by this point, but Conrad did mention that he journalled about this subject. Unlike the eulogy, we did manage to get a preview of what he wrote. Enjoy.
Subscribe to the Simple Minds podcast for more cute journaling tips.
Grab a drink and check out our Facebook and Instagram. Give us your hot take on the topics discussed in this episode.
Where do you think artificial intelligence is going? Have you written your own eulogy?
Mentioned in This Episode