

Wake me up when it can run Adobe Lightroom.


Home Assistant! I got a cheap refurbished mini PC for 60 bucks and a zigbee stick for 12 or so, been running HA for a year now, it works very well and pairs perfectly with the IKEA zigbee stuff. I have it read the alarm from my phone and turn on radio on a wifi speaker and smart lights at that time. Kitchen light is now automatic with a little motion sensor and ESP I got from AliExpress for a few bucks. Everything is completely local, no internet access. It’s great!
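The motion-sensor kitchen light described above can be sketched as a Home Assistant automation. This is a minimal, hedged example — the entity IDs (`binary_sensor.kitchen_motion`, `light.kitchen`) are placeholders and will differ depending on how your Zigbee devices were named when paired:

```yaml
# Minimal sketch of a motion-activated light in Home Assistant.
# Entity IDs are placeholders; substitute your own.
automation:
  - alias: "Kitchen light on motion"
    trigger:
      - platform: state
        entity_id: binary_sensor.kitchen_motion
        to: "on"
    action:
      - service: light.turn_on
        target:
          entity_id: light.kitchen
  - alias: "Kitchen light off after no motion"
    trigger:
      - platform: state
        entity_id: binary_sensor.kitchen_motion
        to: "off"
        for: "00:02:00"   # wait 2 minutes of no motion before switching off
    action:
      - service: light.turn_off
        target:
          entity_id: light.kitchen
```

Everything here runs locally on the mini PC; no cloud service is involved.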
Epstein was convicted of procurement of minors to engage in prostitution in 2008, Chomsky met with him in 2015. That’s enough hindsight for Chomsky as well if you ask me.
It opens up that possibility, but there is no proof of him doing anything wrong.
Spending time and discussing politics with a convicted sex trafficker as well as getting on his “Lolita express” is already wrong enough in my opinion.


I don’t get it either. I’m pretty sure it’s just marketing bullshit and many people are falling for it. Same with Bluetooth headphones and codecs. I wouldn’t be surprised if the difference between LDAC and AAC on an average Bluetooth headset weren’t even scientifically measurable.


I think this comment made me finally understand the AI hate circlejerk on lemmy. If you have no clue how LLMs work and you have no idea where “AI” is coming from, it just looks like another crappy product that was thrown on the market half-ready. I guess you can only appreciate the absolutely incredible development of LLMs (and AI in general) that happened during the last ~5 years if you can actually see it in the first place.


No idea why you’re getting downvoted. People here don’t seem to understand even the simplest concepts of consciousness.


What is “actual intelligence” then?


You know what, I think it’s actually the opposite. Anyone pretending their brain is doing more than pattern recognition, and that AI can therefore not be “intelligence”, is a fucking idiot.


This. Same with the discussion about consciousness. People always claim that AI is not real intelligence, but no one can ever define what real/human intelligence is. It’s like people believe in something like a human soul without admitting it.


AI isn’t math formulas though. AI is a complex dynamic system reacting to external input. There is no fundamental difference here to a human brain in that regard imo. It’s just that the processing isn’t happening in biological tissue but in silicon. Is it way less complex than a human? Sure. Is there a fundamental qualitative difference? I don’t think so. What’s the qualitative difference in your opinion?


You’re getting downvoted but I absolutely agree. I don’t understand why “AI algorithms are just math, therefore they can’t have consciousness” seems to be the predominant view even among people interested in the topic. I haven’t heard a single convincing argument why “math” is fundamentally different from human brains. Sure, current AI is way less complex and doesn’t have a continuous stream of perceptual input. But that’s something a “proper” humanoid robot would need to have, and processing power will increase as well.


You talk like you know what the requirements for consciousness are. How do you know? As far as I know that’s an unsolved philosophical and scientific problem. We don’t even know what consciousness really is in the first place. It could just be an illusion.


I just hope we don’t have to go through the whole Third Reich thing again like when TV was invented.


Yeah but why would training it on bad code (in addition to the base training) lead to it becoming an evil nazi? That is not a straightforward thing to expect at all, and certainly an interesting effect that should be investigated further instead of just being dismissed as a predictable garbage-in-garbage-out effect.


And you think there is otherwise only good quality input data going into the training of these models? I don’t think so. This is a very specific and fascinating observation imo.


It’s not that easy. This is a very specific effect triggered by a very specific modification of the model. It’s definitely very interesting.


Of course material science is technology lol
I use it for little Python projects where it’s really really useful.
I’ve used it for Linux problems where it gave me the solution to problems that I had not been able to solve with a Google search alone.
I use it as a kickstarter for writing texts by telling it roughly what my text needs to be, then tweaking the result it gives me. Sometimes I just use the first sentence, but it’s enough to give me a starting point to make life easier.
I use it when I need to understand texts about a topic I’m not familiar with. It can usually give me an idea of what the terminology means and how things are connected, which helps a lot for further research on the topic and ultimately understanding the text.
I use it for everyday problems, like when I needed a new tube for my bike but wasn’t sure what size it was. I told it what was written on the tyre and showed it a picture of the tube packaging while I was in the shop, and asked if it was the right one. It could tell me that it was the correct one and why, and the explanation was easy to fact-check.
I use Photoshop AI a lot to remove unwanted parts in photos I took or to expand photos where I’m not happy with the crop.
Honestly, I absolutely love the new AI tools and I think people here are way too negative about it in general.
Last time I checked, Winboat just wasn’t there yet, see e.g. https://www.xda-developers.com/tried-cutting-windows-out-my-life-with-winboat/
However, your comment made me google again, and I found this thread that looks pretty promising. Will definitely investigate further. So thanks! https://www.reddit.com/r/linux_gaming/comments/1qdgd73/i_made_adobe_cc_installers_work_on_linux_pr_in/?tl=de