• 84 Posts
  • 452 Comments
Joined 3 years ago
Cake day: June 24th, 2023


  • I was already posting on web forums (and wikis) before Facebook or Twitter became popular, back when the Internet was not yet very established and posting things on it yourself was something only a few people thought of doing.

    I was outright excited when I saw “social media” becoming more mainstream. I thought at the time: at least more people are using the Internet. Even if it’s “just” Facebook or Twitter (which I didn’t and still don’t see much value in), at least it’s the Internet, and that’s a good thing, because the Internet is a great and exciting thing for society and a wonderful source of entertainment!

    Now we live in a world where the general public mostly only knows how to operate social media apps, otherwise has no tech proficiency at all, doesn’t even know what else is out there on the Internet, and doesn’t know or care how the social media apps they’re using are designed to manipulate them. And politicians are busy working to make it harder for good idealistic people to solve those problems. :(


  • I think big tech has proven that it cannot be trusted. Their priorities are simply not in alignment with our own.

    agreed

    Legislation seems to be the only lever that can hope to rein them in (market forces are no longer strong enough).

    I don’t agree. The Internet, at least when not regulated to death, allows new websites to rise and old ones to fall; this has happened many times and can happen again in the future.

    At the same time, smaller networks do not have the resources to comply with government regulations to a T

    agreed

    and so they should be given a longer leash

    Not easy to implement in terms of legislation.

    Governments also do not have the resources to chase down

    and you want to rely on governments not having resources to do things that laws say they could do?


  • algorithms are

    Everything that happens on a computer is based on algorithms. Chronological sorting of everything you’re following is still an algorithm. But I get what you mean.
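    As an aside, even the “non-algorithmic” chronological timeline is itself a sorting algorithm. A minimal sketch (the Post type and field names here are purely illustrative, not any platform’s real API):

    ```python
    from dataclasses import dataclass

    # Hypothetical minimal model of a "chronological feed".
    @dataclass
    class Post:
        author: str
        timestamp: int  # e.g. Unix epoch seconds

    def chronological_feed(posts, following):
        """Newest-first timeline of accounts you follow -- still an algorithm."""
        return sorted(
            (p for p in posts if p.author in following),
            key=lambda p: p.timestamp,
            reverse=True,
        )

    posts = [Post("alice", 100), Post("bob", 300), Post("carol", 200)]
    feed = chronological_feed(posts, following={"alice", "carol"})
    print([p.author for p in feed])  # carol first (newer), then alice
    ```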

    I agree with you that modern personalized recommendation algorithms of the kind the big social media platforms are built on are not a good thing (for people of any age). They break the Internet’s original promise that the general public, not platform operators, decides what ideas we exchange on the Internet. They turn social media operators into (essentially) media companies, picking winners with lots of reach and losers with little reach…

    But none of that has anything to do with how old any users are.


  • u wot m8

    The article simultaneously takes the positions:

    • that it’s good and acceptable that governments are banning social media for young people and prescribing how social media companies must design their platforms, and that the recent court ruling on “social media addiction” was well decided (in the section “How Governments Are Regulating Social Media”)
    • that we should move to services independent from big tech companies, such as the fediverse. (in the section “How Social Media Platforms Could Be Redesigned”)

    Do they not see that these are, at least in practice, contradictory positions? For big tech companies, it’s possible to comply with the kinds of government regulations described there; they have hordes of lawyers who can advise them on how to do that. For fediverse instance admins, meanwhile, it is a lot more difficult. The future of the fediverse absolutely depends on governments staying out of the Internet as much as possible, especially on them not applying their laws to foreign website operators. All that such government regulation achieves is to ensure that no one can operate a website where users can participate unless they have revenue from which to pay any claims they might be liable for.



  • The harm this law aims to address is grave and real. For the 99% of the population who aren’t compiling their own kernels, the ability to “age-lock” a child account to prevent young children from accessing doomscroll brainrot on Instagram is an amazing and valuable feature.

    I disagree even with this premise. I reject the idea that it’s legitimate to want to keep young people from seeing, watching, reading things that they actively want to see/watch/read simply because we have a vague idea that “it’s not good for them”.

    My parents too, unfortunately, agreed with your idea. I remember being a (teenaged) minor, worried that my parents might find out too much about what I’d been reading and doing on the Internet and punish me for it; I don’t wish that on anyone who happened to be born after me. I hereby resolve that if I ever have children, they will not have to worry about this. I think it is a very good thing that modern technology makes it somewhat harder for parents to oppress their children in this manner.

    But there’s nothing inherently wrong with OS developers implementing such a feature if that is what their customers want. There’s a lot wrong with the government mandating it.

    The principled “linux source code is free-speech, and no government mandates can compel changes” stance is quite divorced from reality.

    No, it’s an exactly correct analysis; morally at least, and it should be the legal one as well.

    Are crypto-exchange founders likewise free to implement whatever fraudulent schemes they like, as their source code is their speech to freely dictate?

    I’m not sure what scenario you have in mind. Distributing software (even software that can be used for illegal activities) is free speech. Running and using software isn’t (automatically) speech, it’s an action that can be declared to be criminal. Anyone can use Thunderbird to send phishing emails, but it would be absurd to prosecute the developers of Thunderbird for that.

    I agree with the idea that a user account with an age field is less bad than actual (biometric or ID-based) age verification.

    The rest of your post is so full of meaningless buzzwords that it’s impossible to write anything coherent about it.




  • With chat control we actually have to distinguish two different things that people sometimes confuse:

    • voluntary chat control (“chat control 1.0”), which is currently already the law in the EU
    • mandatory chat control (“chat control 2.0”), proposed in 2022

    Voluntary chat control is about letting operators of communication services voluntarily scan messages for certain illegal activity (without this constituting a violation of data protection laws). This doesn’t break encryption and isn’t a part of a war on general purpose computing. While there are many good arguments against it, it’s not especially catastrophic. It’s a detail of business regulation.

    Mandatory chat control is about forcing them to do so, which must necessarily break encryption and impose limits on software freedom. This is what is most important to oppose.

    The most recent win ended up rejecting even (most) voluntary chat control, which is a good sign that mandatory chat control won’t get a majority either.