I couldn’t have said it better myself. All of these companies firing people are doing it because they want to fire people. AI is just a convenient excuse. It’s RTO all over again.
It’s not just a convenient excuse: there are swaths of C-suites who genuinely believe they can replace their workforce with AI.
They’re not correct but that won’t stop them from trying.
The irony is that AI will probably be able to do the jobs of the c-suite before a lot of the jobs down the ladder.
It’s a pretty low bar they have to get over. And hey, they might be even better since the AI would feel the pain of their failures instead of getting a golden parachute.
Need more news articles pitching this idea to shareholders.
I mean c-suite jobs (particularly CEO), are usually primarily about information coordination and decision-making (company steering). That’s exactly what AI has been designed to do for decades (make decisions based on inputs and rulesets). The recent advancements mean they can train off real CEO decisions. The meetings and negotiation part of being a c-suite (the human-facing stuff) might be the hardest part of the job for AI to replicate.
How do you figure that?
I don’t have a real clear idea what every one of the C suite people do exactly.
But CIOs seem to set IT strategy and goals in the companies I’ve worked. Broad technology related decisions such as moving to cloud. So, basically, reading magazines and putting the latest trend in action (/s?). Generative AI could easily replace some of the worst CIOs I’ve encountered lol.
CEOs seem to make speeches about the company, enact directions of the board, testify before Congress in some cases, make deals with VC investors, set overall business strategy. I don’t really see how generative AI takes this job.
CFO? COO? No fucking clue what they do.
Curious what others think.
All C-suite positions are about managing people and project planning. They set initiatives and metrics to measure success for those initiatives.
A CEO gives an overall direction for the company and gives the other ELT members their objectives, such as giving the CFO a goal of limiting spending or a CIO to build a user capacity within a specific budget and with X uptime.
In this age of titles over responsibility, a C-suite position can cover very specific things, like Chief Creative Officer or Chief Customer Officer, so a comprehensive list is difficult. But the key thing is that almost all white-collar orgs look like a pyramid, with decisions starting at the top and turning into work as they make their way down.
The senior VPs and directors under those C levels then come up with a plan for reaching those objectives and relay that plan to the C level for coordination and setting expense expectations. There is a series of adjustments or an approval, which then starts the project. Project scope determines how long it will take and how much it will cost using a set number of bodies to work the project.
Hopefully this helps explain how C levels interface with the rest of the company.
It probably could. The trouble is getting training data for it. If you get that and one company becomes wildly successful off it, stockholders will demand everyone do it.
Not sure; those require less talking to machines and more talking to humans. I think jobs that mostly involve talking to machines should be easier to automate first, because machines obey logic. LLMs don’t follow that idea, but they’re just the latest hyped model; there are many other algorithms better at rational tasks.
Well, there’s one good thing that will come out of this: these kinds of idiotic moves will help us figure out which companies have the right kinds of management at the top, and which ones don’t have any clue whatsoever.
Of course, it will come with the working class bearing the brunt of their bad decisions, but that has always been the case unfortunately. Business as usual…
That’s not something to be figured out once, it’s a perpetual process.
This is the knowledge economy.
I think you vastly underestimate how many people’s job it is to collect data from points A, B, and C, tabulate it, and present it to someone.
The impetus and momentum of ‘AI’ will sweep away thousands of jobs.
My dad accidentally bought 2 chargers a few weeks ago. He tried refunding it, and what do you know, the company fired their support staff and replaced them with chat bot AIs. Anyway, the AI looked at his order and helpfully told him he had already returned the product and it had already been refunded so there was nothing left to do.
It kept doing this to him every time he tried to return the second charger, and there wasn’t any other way to contact them on their site, so he ended up leaving a 1-star review on their site complaining about the issue. Then an actual person contacted him to get it sorted out.
This whole AI trend is so fucking stupid.
Break the AI session, and post the screenshots to Twitter.
For example, get it to detail the ways the company screws over customers, or why it will become a great ally in the genocide yet to come.
At minimum, you’ll get your refund.
Twitter? Gross.
But that requires me to have a Twitter account, which I’m not gonna do. Fuck Musk.
Make a throwaway Twitter account for a single customer service issue. I’ve done it; it’s not hard, especially when dealing with any company large enough to have a social media team. They’ll be monitoring relevant hashtags to internally escalate customer service issues in order to bring them back in-house and off a public forum.
Face it man, we haven’t been able to speak to anyone remotely useful for the last 10 years. They have scripts, and procedures.
The job was deskilled years ago. Automation won’t make it much worse.
An AI like that might have some spicy exploits.
If you convince a human to give you the password, that’s called social engineering. If you convince an AI to send you free stuff, what kind of engineering is that?
I feel like a large majority of AI problems are really just systemic economic problems below the surface. Not all, but most.
Start spinning up GitHub repos populated with broken code and incorrect processes for other jobs to train the AI and make it worse.
Ha that’s just my regular code.
Is self harm like this allowed?
I really appreciate this comment. I feel the same.
They’ve already trained on Stack Overflow, if you want an AI that recommends a complete change of technology stack in preference to solving the problem at hand.
I don’t know if it can also insult you for wanting to solve the problem?
Looking for a pure CSS implementation of a concept?
Best I can do is an overly elaborate jquery solution to your question, sorry.
But Microsoft already bought npm.
Microsoft ruins everything good.
Has anyone made a program for poisoning code? Sort of like the way nightshade is for pictures.
I just hired an employee who managed things as I was on a leave of absence and things went fine without me. Getting a little pushback from MY boss now because you know, this cheaper employee just did my job.
Of course, he did it for a portion of the year after I managed to complete 3 major projects early so he didn’t have to deal with them and I left a month-by-month explanation of how to do everything he had to do. And the one problem that popped up went unresolved until I returned.
That is basically the situation with AI too. You still need someone knowledgeable in the loop to describe the things it needs to do, and handle exceptions.
still need someone knowledgeable in the loop to describe the things it needs to do, and handle exceptions
And any engineer or technician will tell you, exceptions are 80% of their job.
I had to rewrite our entire scheduling system at work to use Outlook instead of Google Calendar. The guy who wrote the Google Calendar scheduling system made it so unmaintainable that it was faster to just rewrite the entire thing from scratch (1000+ line lambda function with almost 0 abstraction).
At least 90% of what I wrote is just exception handling. There’s ~15 different 4xx/5xx errors that can be returned for each endpoint, but only 1 or 2 200 responses.
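That “mostly exception handling” shape can be sketched in Python. This is a minimal, hypothetical stand-in, not the actual scheduling code: the status-code sets and the function name are my own, but it shows how one happy path ends up surrounded by many distinct failure branches.

```python
# Hypothetical sketch: one success path, many failure modes per endpoint.
RETRYABLE = {429, 500, 502, 503, 504}   # transient errors: back off and retry
FATAL = {400, 401, 403, 404, 409, 410}  # permanent errors: surface to the caller

def classify_response(status: int) -> str:
    """Map an HTTP status code to a handling strategy."""
    if 200 <= status < 300:
        return "ok"        # the 1-2 success responses per endpoint
    if status in RETRYABLE:
        return "retry"
    if status in FATAL:
        return "fail"
    return "unknown"       # anything unexpected gets logged and escalated
```

In practice, each `retry` and `fail` branch would carry its own cleanup and logging, which is where most of the line count goes.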
I bet in the future someone who sees your code will think the same. Just the nature of things.
🎵It’s the circle of homegrown-coded-solutions, and it moves us all🎵
This is fair, but it’s at least broken up so they can selectively gut the parts of it they don’t like instead of having to figure out what a 300 line method named “process” does.
“You’re 100% right, you should promote me so I can train more people to be able to run things. Things falling apart whenever someone goes away is a key sign of a bad leader, not a good one. I think I’ve demonstrated that I’ve managed this department into where it can function smoothly without me needing to put full time into it and I’d do well with an opportunity to move some other things in the company forward.”
“Hey, unrelated question, what’s your boss’s contact info?”
The issue is: how many AI job-holders can one specialist oversee? How many jobs are we getting rid of that will supposedly be offset by new jobs created in manufacturing and AI hosting/training?
Now how many of those jobs have or will actually materialize?
That’s my issue, it’ll just get placed on IT’s shoulders without any additional support.
Sounds like the issue is you did their job for them.
The thing about AI is, it makes a terrible scapegoat and absolutely doesn’t give a shit if you fire it.
Hence, my job is safe for the foreseeable future.
This has been my general worry: the tech is not good enough, but it looks convincing to people with no time. People don’t understand you need at least an expert to process the output, and likely a pretty smart person for the inputs. It’s “trust but verify”, like working with a really smart parrot.
it’s basically just a calculator but with words. you can’t just hire a calculator even tho it knows a lot of math
AI can’t replace a person just yet, but it can easily augment a person’s output so only a quarter as many workers are needed. Yes, this has happened throughout history, but AI is poised to displace workers across almost every industry.
I generally agree. It’ll be interesting what happens with models, the datasets behind them (particularly copyright claims), and more localized AI models. There have been tasks where AI greatly helped and sped me up, particularly around quick python scripts to solve a rote problem, along with early / rough documentation.
However, using this output as justification to shed head count is questionable for me because of the further business impacts (succession planning, tribal knowledge, human discussion around creative efforts).
If someone is laying people off specifically to gap fill with AI, they are missing the forest for the trees. Morale impacts whether people want to work somewhere, and I’ve been fortunate enough to enjoy the company of 95% of the people I’ve worked alongside. If our company shed major head count in favor of AI, I would probably have one foot in and one foot out.
Doesn’t have to be so obvious as, “We’re cutting people because of AI.” My team has gradually shrunk over the last few years, not due to AI, with no intention of replacing people that leave occasionally. AI could easily be a way to regain productivity losses after running a skeleton crew for months or years. Same effect, but the layoffs were front loaded.
For software, it’s like working with an intern who’s really good at searching StackOverflow.
- rewind 40 years
- replace ‘AI’ with ‘computers’
Exactly, and I mean exactly, the same thing was said back in the 80s
Edit: formatting
Ya what’s your point? Are you saying that the invention of computers didn’t displace a lot of jobs?
If you’re saying that AI is going to disrupt the market and displace a large number of jobs just like computers did then you’re 100% right.
Nothing is finite. AI isn’t going to be the first or last thing to shake up the world.
Eventually your skills are going to become less valued and you’ll have no choice but to retool. Either you figure out how to retool or you get left behind.
The latest iteration of this kind of technology is always called AI until the next iteration comes along.
The Segway was sold at the time as a “revolution in transportation”. Ditto for the Hyperloop, by the way.
Then there’s the Theranos’ “revolutionary” Tech as mentioned in the article.
And don’t get me started on how Pets.com (which famously went bust in the dot-com crash) was also revolutionary.
There are way more cases of tech snake oil being peddled to the masses as “revolutionary” than there are of truly revolutionary tech being sold as such. I can only think of three truly revolutionary technologies in the last half century: the personal computer, the Internet, and smartphones. One of those was a surprise revolution, another was only hyped after it actually started showing revolutionary results, and only smartphones were hyped from the start. So the logical default position for anybody but the snake oil salesmen trying to swindle the masses is to suspect tall tales in tech: the taller the tale, the greater the suspicion.
(I keenly remember the early days of the Internet, and it was incredibly low-key compared to the present-day hype-spectacles for bullshit that never even works.)
Believing such claims by default is either incredibly naive or the product of a vested financial interest in getting people to put money into it.
Well this isn’t quite true, automation and computers have replaced many jobs. They just haven’t been skilled labour.
Now AI is catching up with skilled labour, whether it’s CNNs for loss prevention, LSTMs/1D-CNNs for anomaly detection in time series (e.g. biosignals, finance), or more recently LLMs explaining and adapting code.
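As a much simpler illustration of the time-series anomaly detection the comment describes (the real systems named use LSTM/1D-CNN models; this rolling z-score version is a deliberately basic stand-in, and all names and thresholds here are assumptions):

```python
import statistics

def rolling_zscore_anomalies(series, window=20, threshold=3.0):
    """Flag indices whose value is more than `threshold` standard
    deviations from the mean of the preceding `window` values."""
    anomalies = []
    for i in range(window, len(series)):
        ref = series[i - window:i]          # the trailing window of context
        mu = statistics.fmean(ref)
        sigma = statistics.stdev(ref)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies
```

The learned models earn their keep where the “normal” pattern is too complex for a fixed statistical rule like this one.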
In one way or another, that work, at least in part, would have been done by a person, even if it’s an intern for example.
They just haven’t been skilled labour.
That’s where the majority of jobs are that computers and automation “took”.
Large companies needed hundreds of accountants to do what a dozen can do now. Same goes for developers. Or biologists. Or architects. Or whatever else.
I think AI right now has the best chance of replacing upper management and executives. Think of the savings!
AI is the new outsourcing, and is even more problematic.
Don’t confuse this with the fact that a lot of people know their job is bullshit. People like to sit there thinking ‘an AI can’t take my job’ while at the same time thinking ‘a monkey could do this job, it’s such a waste of time.’
My job isn’t bullshit, but management has no concept of the true amount of time it takes to do my job. Depending on projects I can go from 2 hours of work a week up to around 60 hours of work a week. With the majority of weeks being under 40 hours. And yet management somehow thinks that they’re giving me 8 hours of work to do every day despite them regularly being the blocker to new work.
Middle management only cares that it looks like you’re working (and thus their job of supervising you doing the work is necessary (apparently)), and upper management only cares that you’re making them money.
Clearly it’s the clueless middle managers above you whose jobs are bullshit.
If I get a whiff that I can automate your job, you bet your ass I will fire you and try. If it doesn’t work, worst case scenario is I found out AI isn’t where I need it to be and I will hire someone else.
I hate lazy people who complain. But then again, I’m full of shit writing this during work hours so fuck me
I don’t think AI could do my job effectively and tbh I don’t think people would want it to.
No kidding huh. I’m glad we’re finally having the discussion about AI and what that means for employment and things like UBI, but this is far from actual AI.
Are we actually having that discussion? All I see is people concerned about being replaced by AI asking to put constraints on it and people wanting to replace their employees with AI ignoring them. No one will get UBI or anything like it until the latter group is more concerned about a mob with pitchforks showing up at their door than they are with giving their stock price a small bump.
What really concerns me is that the modern-day version of mobs with pitchforks seems to be fascism, because fascists have learned how to create the mob and harness it for their own purposes.
This is going to be like the self checkout lanes at the store but for creative jobs.
At the end of the day, a company will be able to produce the same output with fewer people. Some stuff will be of lower quality, just like sometimes people spend time on Lemmy and then phone in some crappy work.
But all the self checkouts around me have been ripped out and replaced with cashiers again. For some reason having someone paid 30 cents over minimum wage watching a bunch of people shop on the honor system with a bunch of finicky machines didn’t work.
You might just live in crime central, that’s not happening everywhere. Probably on an individual cost of cashier versus lost stock basis with each location.
There’s a fairly big Shoppers Drug Mart I went to in North Vancouver and they only have self-checkout and one customer service desk. Most people still seem to hate them, judging from the moaning and groaning you hear while waiting in line.
I sense a lot of dislike for self-checkouts and wonder why they are done so poorly where other people live. In Holland they are fine. You can self-scan with either a portable scanner or your phone while you shop, or scan the items at checkout. I’ve literally never had to wait for a free machine, and they work well. Some people use the registers with humans scanning for you, and they seem fine too.
Cool, I literally replaced my entire job with AI, but I’m not telling anyone IRL.
Well, aside from the pointless emails and meetings.
What’s your job
Pointless emails and meetings.
What’s the AI’s job
Nice try, upper management
Pointless emails and meetings.
AI does not exist, but it will ruin everything anyway.
Loads of good points in that video, thanks for posting. The only argument I don’t really agree with is about bias. She’s implying here that a human decision maker would be less biased than the AI model. I’m not convinced by that because the training data is just a statistical record of human bias. So as long as the training data is well selected for your problem, it should be a good predictior for the likelihood of bias in your human decision maker.
I think with a human operator, we can be proactive. A person can be informed of bias, learn to recognize it, and even attempt to compensate for their own.
An AI model is working off of aggregate past data that we already know is biased. There is currently no proactive anti-bias training that can be done to an AI model without massively altering the dataset, which, at some level of alteration, loses its value as true-to-life data.
Secondly, AI is a black box: we can’t see the inner workings of the model and determine what types of associations it is making to come to its result. So we don’t even know what part of the dataset would need to be altered to address the bias.
Lastly, the default assumption by end users will be, unless there are glaring defects, that any individual result is correct and unbiased, because “AI was made by smart people and data, and data doesn’t lie.” And because interrogating and validating the result defeats the whole purpose of using AI to cut out those steps of the process.
I think with a human operator, we can be proactive. A person can be informed of bias, learn to recognize it, and even attempt to compensate for their own.
I think you’re being very optimistic here. I hope very much that you’d be right about the humans. I have a feeling that a lot of these type of decisions are also resulting from implicit biases in humans that these humans themselves might not even recognize or acknowledge. Few sexists or racists will admit to being racists or sexists.
I agree about your point about the “computer says no” issue. That’s also addressed in the video and fits well into her wider point that large parts of the population not understanding how so-called AI works is a huge problem.
the training data is just a statistical record of human bias.
It’s not. It’s a record of online conversations, which tend to be more polarized and extreme than real people are.
That’s why I said
So as long as the training data is well selected for your problem…
It’s clear that in the training data for LLMs, 4chan, reddit, etc. are over-represented, so that explains why chatgpt might be more awful than an average person. Having an LLM decide on, e.g., college admission would be like having a Twitter poll to decide on who should be its next CEO. Like that’s obviously stupid, nobody would ever do that, right?
The problem is that for the college admission example, the models were trained on previous admission decisions made by college employees, and these models are still biased.
acollierastro is a treasure.
Everything I read was well worded and well reasoned. However, it seems like either my ADD got the better of me, or that was the article that has no end. I didn’t really realize before that my attention has a word count, but I now know that it is less than this article.