He allegedly used Stable Diffusion, a text-to-image generative AI model, to create “thousands of realistic images of prepubescent minors,” prosecutors said.
OMG. Every other post is saying they're disgusted by the images part but that it's a grey area; either way, he's definitely in trouble for contacting a minor.
Cartoon CSAM is illegal in the United States. AI images of CSAM fall into that category. It was illegal for him to make the images in the first place BEFORE he started sending them to a minor.
Yeah, that's toothless. They decided there is no particular way to age a cartoon; the characters could be from another planet and simply seem younger while actually being older.
It's bunk. Let them draw or generate whatever they want; totally fictional events and people are fair game, and quite honestly I'd rather they stay active doing that than get active actually abusing children.
Outlaw shibari and I guarantee you’d have multiple serial killers btk-ing some unlucky souls.
The problem with AI CSAM generation is that the AI has to be trained on something first. It has to somehow know what a naked minor looks like. And to do that, well… You need to feed it CSAM.
So is it right to be using images of real children to train these AI? You’d be hard-pressed to find someone who thinks that’s okay.
You make the assumption that the person generating the images also trained the AI model. You also make assumptions about how the AI was trained without knowing anything about the model.
Are there any guarantees that harmful images weren't used in these AI models? Based on how image generation works now, it's very likely that harmful images were used in the training data.
And if a person is using a model based on harmful training data, they should be held responsible.
However, the AI owner/trainer has even more responsibility in perpetuating harm to children and should be prosecuted appropriately.
And if a person is using a model based on harmful training data, they should be held responsible.
I will have to disagree with you for several reasons.
You are still making assumptions about a system you know absolutely nothing about.
By your logic, for anything born from something that caused suffering to others (in this example, AI trained on CSAM), the users of that product should be held responsible for the crime committed to create it.
Does that apply to every product/result created from human suffering or just the things you don’t like?
Will you apply that logic to the prosperity of Western nations built on the suffering of indigenous and enslaved people? Should everyone who benefits from Western prosperity be held responsible for the crimes committed against those people?
What about medicine? Two examples are the Tuskegee syphilis study and the cancer cells of Henrietta Lacks. Medicine benefited greatly from both, but crimes were committed against the people involved. Should every patient in a cancer program that benefited from Ms. Lacks' cells be required to pay compensation to her family? Even the doctors who used her cells without permission didn't.
Should we also talk about the advances in medicine found by Nazis who experimented on Jews and others during WW2? We used that data in our manned space program paving the way to all the benefits we get from space technology.
The difference between the things you're listing and CSAM is that those other things have actual utility outside of getting off. Were our phones made with human suffering? Probably, but phones have many more uses than making someone cum. Are all those things wrong? Yeah, but at least good came out of them beyond giving people sexual gratification directly from the harm of others.
Are there any guarantees that harmful images weren’t used in these AI models?
Lol, highly doubt it. These AI assholes pretend that all the training data randomly fell into the model (off the back of a truck) and that they cannot possibly be held responsible for that or know anything about it because they were too busy innovating.
There’s no guarantee that most regular porn sites don’t contain csam or other exploitative imagery and video (sex trafficking victims). There’s absolutely zero chance that there’s any kind of guarantee.
the AI has to be trained on something first. It has to somehow know what a naked minor looks like. And to do that, well… You need to feed it CSAM.
First of all, not every image of a naked child is CSAM. This has actually been kind of a problem with automated CSAM-detection systems triggering false positives on non-sexual images and getting innocent people into trouble.
But also, AI systems can blend multiple elements together. They don’t need CSAM training material to create CSAM, just the individual elements crafted into a prompt sufficient to create the image while avoiding any safeguards.
You ignored the second part of their post. Even if it didn't use any CSAM, is it right to use pictures of real children to generate CSAM? I really don't think it is.
There are probably safeguards in place to prevent the creation of CSAM, just like there are for other illegal and offensive things, but determined people work around them.
The images were created using photos of real children, even if said photos weren't CSAM (which can't be guaranteed). So the victims are the children whose photos were used to generate the CSAM.
Sure, but isn't the perpetrator the company that trained the model without their permission? If a doctor saves someone's life using knowledge based on Nazi medical experiments, surely the doctor isn't responsible for the crimes?
So is the car manufacturer responsible if someone drives their car into the sidewalk to kill some people?
Your analogy doesn't match the premise. (Again, assuming there is no CSAM in the training data, which is unlikely.) The training data is not the problem; it's how the data is used. Using those same pictures to generate photos of medieval kids eating ice cream with their family is fine. Using them to make CSAM is not.
It would be more like the doctor using the nazi experiments to do some other fucked up experiments.
Sorry, my app glitched out and posted my comment multiple times, and got me banned for spamming…
Now that I got unbanned I can reply.
So is the car manufacturer responsible if someone drives their car into the sidewalk to kill some people?
In this scenario no, because the crime was in how someone used the car, not in the creation of the car. The guy in this story did commit a crime, but for other reasons. I'm just saying that if you are claiming that children in the training data are victims of some crime, then that crime was committed when the model was trained. They obviously didn't agree to their photos being used that way, and most likely didn't agree to their photos being used for AI training at all. So by the time this guy came around, they were already victims, and would still be victims even if he had done nothing.
Let's do a thought experiment, and I'd like you to tell me at what point a victim was introduced:
1. I legally acquire pictures of a child, fully clothed and everything
2. I draw a picture based on those legal pictures, but the subject is nude or doing sexually explicit things
3. I keep the picture for my own personal use and don't distribute it
Or with AI:
1. I legally acquire pictures of children, fully clothed and everything
2. I legally acquire pictures of nude adults, some doing sexually explicit things
3. I train an AI on a mix of 1 & 2
4. I generate images of nude children, some of them doing sexually explicit things
5. I keep the pictures for my own personal use and don't distribute any of them
6. I distribute my model, using the right to distribute from the legal acquisition of those images
At what point did my actions victimize someone?
If I distributed those images and those images resemble a real person, then that real person is potentially a victim.
I will say someone who does this is creepy and I don't want them anywhere near children (especially mine, and yes, I have kids), but I don't think it should be illegal, provided the source material is legal. As soon as I distribute it, though, there absolutely could be a victim. Being creepy shouldn't be a crime.
I think it should be illegal to make porn of a person without their permission, regardless of whether it was shared. Imagine the person it's based on finds out someone is doing that. That causes mental strain on the person, just like how revenge porn doesn't actively harm a person but causes mental strain (both the initial upload and continued use of it). For scenario 1 it would be at step 2, when the porn of the person is made. For scenario 2 it would be a mix of steps 3 and 4.
Thanks for sharing! I’m going to disagree with pretty much everything, so please stop reading here if you’re not interested.
Imagine the person it is based off of finds out someone is doing that. That causes mental strain on the person…
Sure, and there are plenty of things that can cause mental strain, but that doesn’t make those things illegal. For example:
public display of affection - could cause mental strain for people who recently broke up or haven't found love
drug use - recovering addicts could experience mental strain
finding out someone is masturbating to a picture of you
And so on. Those things aren’t illegal, but someone could experience mental strain from them. Experiencing that doesn’t make you a victim, it just means you experience it.
revenge porn doesn’t actively harm a person but causes mental strain
Revenge porn damages someone’s reputation, at the very least, which is a large part of why it’s illegal.
Someone keeping those images for private use doesn’t cause harm, therefore it shouldn’t be illegal.
Someone doing something creepy for their own use should never be illegal.
It has to somehow know what a naked minor looks like.
Not necessarily
You need to feed it CSAM
You don’t. You just need lists of other things, properly tagged. If you feed an AI a bunch of clothed adults and a bunch of naked adults, it will, in theory, “understand” the difference between being clothed and naked and create any of its clothed adults, naked.
With that initial set above, you feed it a bunch of clothed children. When you ask for a naked child, it will either produce a child head with naked adult body, or a “weird” naked child. It “understands” that adult and child are different things, that clothed and naked are different things, and tries to infer what “naked child” looks like from what it “knows”.
So is it right to be using images of real children to train these AI?
This is the real question, and one I don't know the answer to, because it boils down to consent to being part of a training set (whether your own consent as an adult, or a parent's for their child), much like how it works for stock photos and videos.
“I consent to having my likeness used for AI training models, except for any use that involves NSFW content” - Fair enough. Good luck enforcing that.
My main issue with generation is the ability to make it close enough to reality. Even with the more realistic art stuff, some outright referenced or even traced CSAM. The other issue is the lack of easy differentiation between reality and fiction, which muddies the water. "I swear officer, I thought it was AI" would become the new "I swear officer, she said she was 18".
That would mean you need to enforce the law for whoever built the model. If the original creator has 100TB of cheese pizza, then they should be the one who gets arrested.
Otherwise you’re busting random customers at a pizza shop for possession of the meth the cook smoked before his shift.
There is also the issue of determining whether a given image is real or AI. If AI images were legal, prosecutors would need to prove that images are real and not AI, at the risk of letting real offenders go.
The need to ban AI CSAM is even clearer than the need to ban cartoon CSAM.
And in the process force non-abusers to seek their thrill with actual abuse. Good job, I'm sure the next generation of children will appreciate your prudish, factually inept effort. We've tried this with so much shit; prohibition doesn't stop anything, it just creates a black market and an abusive power system to go with it.
You can say pedophile… that "pdf file" stuff is so corny and childish. Hey guys, let's talk about a serious topic by calling it things like "pdf files" and "graping". Jfc
TikTok and Instagram are the main culprits; they'll shadowban, or outright delist, any content that uses no-no words: sex, rape, assault, drugs, die, suicide. It's a rather big list.
That’s the issue though. As far as I know it hasn’t been tested in court and it’s quite possible the law is useless and has no teeth.
With AI porn you can point to real victims whose unconsented pictures were used to train the models, and say that’s abuse. But when it’s just a drawing, who is the victim? Is it just a thought crime? Can we prosecute those?
I thought cartoons/illustrations of that nature were only illegal in the UK (Coroners and Justice Act 2009) and Switzerland. TIL about the PROTECT Act.
The thing about the PROTECT Act is that it relies on the Miller test, which has obvious holes and largely depends on who is reviewing the material. I've heard even the UK law has holes which can be exploited.
Most people instead take a trip to a place where underage sex workers are common, and one can just have an external hard drive and/or a USB stick for that material, which they hide. "An"caps are actively trying to form their own countries, partly to legalize "recordings of crimes", as they like to call them, if not outright to legalize child rape and child sex trafficking.
While I do think the realistic stuff should be illegal, no question, with the loli/shota/whatever you're just opening a can of worms that could be applied to other things too, and some already have.
Regulators used the very same "normalizing certain sexual acts" argument to try to censor more extreme forms of porn and/or the sexual acts themselves, and partly succeeded in the UK. Sure, scat is gross; many like it exactly because of that, and one could even talk about the health risks. Same with fisting, which is too extreme for many and supposedly extremely painful (many people's only exposure to it was Requiem for a Dream), and it has some associated health risks. However, a lot of that is a misrepresentation of the truth: scat isn't that big of a health risk if you have a good immune system (the rest can be mitigated with precautions and moderation), and fisting isn't inherently painful (source: me).
And the same is true of loli/shota. The terms aren't applied only to actual underage characters, but also to the "short adult" characters common in the VTubing scene, many of whom are also shorter in real life (obligatory "of course not all"). Some of those other characters are adults with exaggerated, almost child-like physiques. Most of it, however, is still just a depiction of children, so I can understand why some want to abstain even from the "adult loli/shota" stuff. I remember when pubic hair removal was becoming mainstream and many, like radical feminists, feared it would normalize pedophilia; I even got called a pedo by a pubic-hair connoisseur for not really liking it. I also don't want to talk over victims of CSA, many of whom want it banned and many of whom want it legal.
As for normalizing: the greatest normalization is done by pedos getting into the fandom to recruit others and entertain the idea of a lower age of consent. For a long time we threw those motherfuckers out of our community. But then 4chan happened, and suddenly these very same people started screaming "it's just an edgy joke, bro", so at some point the people trying to keep these creeps out of the anime community became villainized; with gamergate and the culture wars hitting the scene, "gatekeeping the normies" became the priority, and these sick fucks became a feature. That created, in the anime community, a space free of gatekeeping against nazis/pedos/weirdos, and a space that doesn't moralize about loli/shota.
I had a lot of connections to victims of CSA, most of them teens. None were groomed via loli/shota (everyone's mileage will vary; it's likely different in the age of the internet), but either via some non-pornographic work featuring a teen girl and an older man (usually in a historic setting), or via the perpetrator likening a 25+-year-old guy (often they lied and said they were way younger) going out with a 14-year-old girl to her parents' age gap (I'm in Hungary, where that's technically legal🤮). Usually a simple "that big an age gap isn't okay at your age" talk did wonders, unless the only way for the girl to eat that day was to go out with that guy.
Sources for cartoon CSAM being illegal in the United States:
https://www.thefederalcriminalattorneys.com/possession-of-lolicon
https://en.wikipedia.org/wiki/PROTECT_Act_of_2003
Exactly. If you can’t name a victim, it shouldn’t be illegal.
The topic you're choosing to focus on is really interesting. What are your values?
My values are none of your business. Try attacking my arguments instead of looking for something about me to attack.
If everywhere you go, everyone is abnormal, I have news for you
If the images were generated from CSAM, then there’s a victim. If they weren’t, there’s no victim.
(Also you posted your response like 5 times)
I hate the no victim argument.
Why? Can you elaborate?
That is not an end-user issue, that's a dev issue. You can't train on CSAM if it isn't available, so training on it is a tacit admission of actual possession.
I think the challenge with generative AI CSAM is the question of where the training data originated. There has to be some questionable data in there.
Would Lisa Simpson be 8 years old, or 43 because the Simpsons started in 1989?
Big brain PDF tells the judge it is okay because the person in the picture is now an adult.
Why do people say “graping?” I’ve never heard that.
Please tell me it doesn’t have to do with “The Grapist” video that came out on early YouTube.
To avoid censorship filters on social media; same with "PDF file".
Several countries prohibit any fictional depictions of child porn, whether drawn, written or otherwise. Wikipedia has an interesting list on that - https://en.wikipedia.org/wiki/Legality_of_child_pornography
I wonder if there is significant migration happening into those countries where CSAM is legal.
Unlikely. Tourism, on the other hand…
Sure, and then some judge starts making subjective decisions on drawn/painted art that didn’t hurt anyone and suddenly people are getting hurt.
The justice system is supposed to protect society, not hurt people you don’t like.