• 0 Posts
  • 21 Comments
Joined 9 months ago
Cake day: September 27th, 2023

  • There are quite a few other roguelike (or roguelike-adjacent) games that do beat it handily. To give a few examples:

    DF started development in October 2002 (according to their own website; scroll all the way down).

    UnReal World’s first release was in 1992 and is also still getting regular updates.

    NetHack has gotten new versions ever since 1987. The latest big change was 3.6.0 in 2015; 3.6.7 came out in early 2023, but there’s no reason to believe there won’t be a next version. If we count that it started in 1987 as a fork of Hack, we could even add another 3 years at the front, as Hack was published in 1984.

    Edit: I just realized: In the world of MMORPGs we also have a few examples: EverQuest, which came out in 1999 and is still getting expansions. Even WoW isn’t too far behind with a 2004 release date, which probably means its development began before DF’s too.


  • That was a response I got from ChatGPT with the following prompt:

    Please write a one sentence answer someone would write on a forum in a response to the following two posts:
    post 1: “You sure? If it’s another bot at the other end, yeah, but a real person, you recognize ChatGPT in 2 sentences.”
    post 2: “I was going to disagree with you by using AI to generate my response, but the generated response was easily recognizable as non-human. You may be onto something lol”

    It does indeed have an AI vibe, but I’ve seen scammers fall for more obvious pranks than this one, so I think it’d be good enough. I hope it fooled at least a minority of people for a second or made them do a double take.



  • It’s not as accurate as you’d like it to be. Some issues are:

    • It’s quite lossy.
    • It’ll do better on images containing common objects vs rare or even novel objects.
    • You won’t know how much the result deviates from the original if all you’re given is the prompt/conditioning vector and what model to use it on.
    • You cannot easily “compress” new images; instead you would have to either finetune the model (at which point you’d also mess with everyone else’s decompression) or run an adversarial-style optimization against the model to find the prompt/conditioning vector most likely to reproduce something as close as possible to the original image.
    • It’s rather slow.

    Also, it’s not all that novel. People have been doing this with (variational) autoencoders, another class of generative model. That approach doesn’t even have the flaw of being unable to easily compress new images, since an autoencoder is a trained encoder/decoder pair. It’s also quite a bit faster than diffusion models when it comes to decoding, though often with a greater decrease in quality.
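    A minimal toy sketch of that autoencoder idea, using only NumPy. Everything here (a single linear encoder/decoder, the 16-dim data, the 4-dim code size) is made up for illustration; real image compressors use deep convolutional (V)AEs:

```python
# Toy linear autoencoder "compressor": 16-dim vectors -> 4-dim codes.
# Purely illustrative; real image compression uses deep convolutional
# (variational) autoencoders, not a single linear layer.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data that truly lies on a 4-dim subspace of 16-dim space,
# so a 4-number code can represent each sample well.
basis = rng.normal(size=(4, 16)) / 4.0
data = rng.normal(size=(256, 4)) @ basis  # shape (256, 16)

# Encoder and decoder are plain weight matrices, trained jointly by
# gradient descent on the reconstruction error (the autoencoder objective).
W_enc = rng.normal(size=(16, 4)) * 0.1
W_dec = rng.normal(size=(4, 16)) * 0.1

lr = 0.05
for _ in range(2000):
    code = data @ W_enc    # "compress": 16 values -> 4 values
    recon = code @ W_dec   # "decompress": 4 values -> 16 values
    err = recon - data
    # Gradient steps for the mean squared reconstruction error.
    W_dec -= lr * code.T @ err / len(data)
    W_enc -= lr * data.T @ (err @ W_dec.T) / len(data)

mse = np.mean((data @ W_enc @ W_dec - data) ** 2)
baseline = np.mean(data ** 2)  # error of always reconstructing zeros
print(f"baseline: {baseline:.4f}  after training: {mse:.4f}")
```

    Compressing a new sample is a single matrix multiply (`x @ W_enc`), which is exactly the “trained encoder/decoder pair” advantage: no per-image optimization needed.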

    Most widespread diffusion models even use an autoencoder-adjacent architecture to “compress” the input. The actual diffusion model then works in that “compressed data space”, called latent space. The generated images are then decompressed before being shown to users. Last time I checked, iirc, that compression rate was at around 1/4 to 1/8, but it’s been a while, so don’t quote me on this number.
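    For a rough sense of the numbers: assuming a Stable-Diffusion-like setup where a 512×512 RGB image maps to a 64×64×4 latent (these shapes are my assumption, not a number from any specific model card), the arithmetic works out as:

```python
# Illustrative latent-diffusion "compression" arithmetic. The shapes
# (512x512x3 image, 64x64x4 latent) are Stable-Diffusion-style
# assumptions, purely for the sake of the example.
image_values = 512 * 512 * 3   # scalars in the RGB image
latent_values = 64 * 64 * 4    # scalars in the latent tensor

per_side = 512 // 64                     # spatial downsampling per side
overall = image_values // latent_values  # overall reduction in value count

print(f"per-side downsampling: 1/{per_side}")     # 1/8
print(f"overall value-count ratio: 1/{overall}")  # 1/48
```

    So a figure like “1/4 to 1/8” is plausible if read as per-side spatial downsampling; the reduction in raw value count is quite a bit larger.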

    edit: fixed some ambiguous wordings.




  • I think it’s much more likely that whatever scraper they used to get the training data snatched a screenshot of the movie some random internet user posted somewhere. (To confirm, I typed “joaquin phoenix joker” into Google, and this very image was very high up in the image results.) And of course not only this one but many, many more too.

    Now I’m not saying scraping copyrighted material is morally right either, but I doubt they’d just feed in an entire movie frame by frame (or randomly spaced screenshots from throughout a movie), especially because it would make generating good labels for each frame very difficult.







  • no where near Reddit yet on niche subjects

    I’m always saddened by how not-active some of those subjects are. For example: Even many large games struggle to have dedicated, active communities on Lemmy (assuming I’m not terrible at finding them, which is sadly also possible). Even some of the largest games have only completely dead communities here. A huge draw of Reddit for me was to be able to talk about the games I play with other people who do too. And mostly, the games I’d love to talk about aren’t in the top 10 most played games list.

    Now I could try to (re)vitalize those communities I would love to see around, and I have done so shortly after the exodus (on my previous account that died with the instance it was on). However, there’s only so much talking into the void I can do until it gets boring.

    I also feel like that might be a big issue for people coming over. After I manage to explain to my friends how federation works, they ask me to help them find the [topic of their interest] community, and all I can show them is a community with 10 threads, all over 3 months old and with 0 comments. Sadly, it shouldn’t surprise anyone that they’re not sticking around after that.