11 ways recent AI progress has changed people’s life plans

1. As a part-time translator, I told one of my clients that I could now work in a completely new language I had not previously known. I demonstrated this by feeding their complex translation exam into ChatGPT, which completed it successfully. As a result, I now accept every translation job they offer me, earning an additional $2,800 last week alone. It may not be a sustainable strategy in the long run, but my goal is to stay employed through any potential layoffs by taking on as much work as possible.

2. “Quit my job. I’m going to go all-in, if the market will allow. If this is “only” as transformative as the Web or the Cloud, then I want to be in early. If GPT-4 is Internet Explorer 1.0, Salesforce 1.0, or AWS 1.0, then there is a lot of money to be made. It’s also fun to be at the forefront of the big technological shifts.
I also think I will be in a better place to evaluate the risk if I am closer to it.
It does make me very nervous about the long term, but that’s all the more reason to want to be an expert at it.”

3. “I’ve lowered my retirement contributions by ~30% because I’m not sure they’ll be relevant by the time I would want to draw on them, so I should be trying to get some value out of that money now. But I’m not getting rid of them entirely, because I’m not sure they won’t be relevant. Really just that, though. If you’re getting anxious or angry, you’re probably spending too much time reading about this stuff. Even if it turns out the world is ending in five years, being anxious and angry isn’t how you want to spend those years.”

4. “I am strongly considering quitting my job and taking a 6-month sabbatical to train up on transformers. Worst case, I fall back into my current field; middle case, I learn useful skills and pivot into something that will use them; best case, I’m able to contribute in some small way to interpretability, or I’m able to find sponsorship for independent research.
In terms of personal long-term plans, I haven’t had big shifts. I still want to start a family, and I see any future I want to be a part of as necessarily human at its core. What those kids’ lives will look like, however, I see as a titanic shift. I expect schooling as I knew it growing up to be obsolete, replaced by some sort of 1:1 AI tutoring, potentially in some sort of age-cohort setting to establish social skills, but I don’t see the school system keeping up at any reasonable pace here. At the least, I think it’s likely that my future kids will have an AI companion of their own that will assist in their development as soon as they can talk. Every generation of parents in recent history has had to deal with the contours of technological change, but I see this as likely a greater shift than any that has come before.
In general I think the future is going to be a wild ride, but it is still steerable. I’d like to help steer, and if not, to catch the wave.”

5. “I thought the prospect of AI making my job in medicine (partly) obsolete was decades away. But someone could come out with an AI that does half of my job better than I would ever be able to – THIS YEAR – and it would not be sci-fi. The only obstacles to putting it to work would be law, momentum, and pushback from colleagues everywhere. I’m expected to spend an incredible amount of effort on further education, most of which will eventually (timeframe uncertain) become a waste of time. What was a rock-solid long-term career plan eight years ago now feels like living on borrowed time. On the plus side, GPT has made my hobbies (programming, writing) significantly easier, faster, and more fun. I always joked about switching to IT, but I stopped, because it feels like I’m not joking anymore, and anyway, I don’t know if it’d help.”

6. One anecdote which may cheer you up: some of the United States’ top minds spent a great deal of time at the RAND Corporation in the 1950s thinking about nuclear war. They didn’t need to anticipate the actions of an alien superintelligence, just what a bunch of humans living in the USSR would do. Upon evaluation, they were positive that a nuclear first strike by the USSR was not just possible but (as the rational thing to do) almost certain to happen. Some of them were so certain of their assessments that they didn’t bother to contribute to their retirement plans, as they wouldn’t need them after our inevitable nuclear armageddon. (Kaplan, F. (1983). The Wizards of Armageddon.)
We’re still a ways from an apocalyptic scenario. For example, “fast takeoff” or “FOOM” requires recursively self-improving AI, and so far we have practically no self-improving AI at all. We can’t even point $1 billion of computing power at a problem and have it produce the same results as $1 billion worth of computer programmers given the same task; at the moment, I don’t believe that capability has been demonstrated as more than a toy. Until at least that is possible, we haven’t even started to climb the ladder.
I do expect society will change a lot if AI continues improving at the rate it has. But while AI doomers have some compelling arguments, we are (for better or worse) far from proving that the AI doom scenario is feasible. For a long time, people believed that a computer capable of playing chess would necessarily be capable of intelligent thought; in the 1960s, researchers called chess-playing the “drosophila” of AI, the idea being that creating a sufficiently powerful chess-playing computer would give us insight into the basis of a mind. Nowadays, anyone suggesting that the people trying to make Stockfish faster be bombed before they destroy the world would receive a frosty reception. Likewise, while LLMs seem promising, so far there’s pretty limited evidence that improvements in LLMs will result in agentic superintelligence.
To respond directly to the question: I am still planning to FIRE (very soon). I still expect that civilization won’t end in the next 5-10 years. I’m planning to transition into a post-FIRE situation where I can work on applications of AI, as I think AI and biotech are the two most promising areas to work on right now. In the long run, either my broad-market investments will do very well as capital replaces labour, or an extreme scenario (very good or very bad) will happen and I probably won’t care either way. So basically, I think AI is very important but not likely to be civilization-ending.

7. “I’ve reduced my overall investments, while making sure I was invested in every tech company likely to get AGI. Either AI will usher in a golden age where I won’t need as much money, or it will kill us all and I won’t need any money. The one future I haven’t figured out how to hedge against is China winning the AGI race: I can’t figure out how to invest in China in a way that would hold up if they use AGI to expand hugely. Plus, I have a pension from the US Federal Government that could become useless in the worst-case scenario. That said, I do think China’s ascendancy is the least likely of those three options.”

8. “It’s left me feeling uncertain and anxious. I work as a software dev, and I definitely see how this can impact tech jobs – hell, pretty much all white-collar jobs. I have no idea what the future holds, but I’m starting to think about alternative career paths to go into if there is a radical reshaping of the economy. Maybe nursing, maybe becoming an electrician or plumber, or maybe some kind of engineering that needs to be on site a lot. It’s not too big of an issue for me if we do need to retrain a lot of society – I’m young, I’ve got no real debt, and I’m pretty flexible in what I work on.
If some malicious AI takes off and kills humanity, then we’re all screwed, lol. So I’m not gonna even bother worrying about that, just about the things where I may need to make a decision.”

“It has made me feel increasingly hopeless and angry about a lot of bad life decisions and addictions. I am too stupid to rightfully belong to this sub, and I’m not sure what I am going to do. The only action I have taken is stocking up on books, to try to change my mindset and beliefs so I can (hopefully) move forward instead of lingering in bitterness and depression.
Current situation and beliefs:
I am not smart enough to quickly or even slowly pivot to a new field. After 8 years of working in accounts payable I hate it so much, but I dread forced obsolescence. I should have known accounting was the wrong move, since I hated those classes and was bad at them. But I thought accounting was good information to have and a get-rich-quick degree, so I could FIRE quickly enough. None of that worked out at all. I am debt-free but not even close to a down payment on a house.
I daydream of going into computer science. But I believe that field is also destined for automation, so I am hesitant to study for years only to discover there won’t be a payoff. My average IQ and terrible memory (I’m a former alcoholic) will limit my ability to program, or do anything similar, at a level higher than what AI will be able to do by the time I get there.
I am currently (attempting) to read The Myth of AI to counterbalance my terrifying belief in inevitable economic obsolescence as AI outperforms me at any task I could possibly hope to achieve. If that makes me feel better, I will read The Brain that Changes Itself and Moonwalking With Einstein, in the hope of convincing myself that my brain can improve, heal, and adapt, and that the decade of extremely heavy drinking hasn’t left me a useless and unteachable human. If all that works out, I’ll do the core CompTIA certifications, since that will take me a year (ideally), and then I can have that in my back pocket in case my current accounting job disappears – or possibly even jump to an IT job and escape accounting. If that goes well, I will do the OSSU computer science self-study (2-3 years). If that goes well, I’ll probably go for the post-baccalaureate in computer science from Oregon State (2-3 years). By that point I’ll be in my early to mid 40s, but hopefully looking at jobs I enjoy that pay enough to afford a motherfucking house (or even an apartment). It sucks that it will have taken me 20 years to reach the same level as a newly graduated CS student who didn’t fuck their life up and didn’t squander decades of their life on addiction and depression.”

9. “I am tremendously excited about this moment in time. One way to think about it is that we are all going to live through a tremendous period of change. It is easy to see the doom, and we should keep an eye on those dangers, but it is also a time of tremendous possibility on the upside.
It is possible that a near-term outcome of AI is low-cost farming robotics that produce food cheaply and locally on small pieces of land. Imagine as well that we keep pushing hard for cheap, clean power. With those two building blocks, the future starts to look very rosy.
When I start to think about AI solving many fundamental needs, I start to think of us in a post-capitalist society. I don’t mean we move to a collective-ownership thing. I just literally mean that the use of capital as a mechanism to allocate things doesn’t make much sense when most things are not scarce.
And most people in tech are not working towards Terminator; they are working towards a better future, and mostly they produce more of the upside than the downside.”

10. “I’m feeling incredibly uncertain. I’ve gone from being a total AI sceptic to being fairly confident that LLMs will radically transform or eliminate my job in the next 5 years. While I’m not super worried about financial precarity for myself, it’s opened up all kinds of existential questions about what I want to be doing with my time that I don’t know how to answer. In that way it might ultimately be a good thing, since working a white-collar job until you die “just because” is kind of horrifying in its own way, but it’s making me incredibly unsettled and uneasy. As an aside, I think people here are way too focused on existential risk rather than the kinds of ordinary social collapse that might follow from the elimination of knowledge work.”

11. “I never had any life plans, so no. It’s been just over half a year since I started coming out of a major depressive break where I was close to (actual, literal, I absolutely mean it) insanity. I’m almost incapable of feeling negative emotions right now because the antidepressants are working exceedingly well. I spent the last few years mentally suffering so much that I just wanted it all to end; my empathy went away and doesn’t seem to be coming back. I’m not capable of panicking about AI or anything, really, though I do think we’re pretty close to AGI, not likely to align it, and are fucked as a result. Timelines? >20% chance of human extinction within the next 15 years. >95% within the next 50. Better odds than I was giving myself a year ago, when I was very close to pulling the trigger on this whole “being alive” thing.
Just trucking along, studying, trying to live my best life. Still working on making friends and, now that I’m mentally stable, maybe thinking about relationships. My hobby and what I eventually want to work in (botany) is not at risk of being automated away. I don’t have aspirations of greatness either. My student job working as a cashier in a small corner store that doesn’t take payment by card (and never will) is secure. My plants are growing nicely. I don’t really care if humanity dies. It’s just frustrating that most people can’t seem to grasp the imminent danger we’re in.”