• 0 Posts
  • 4 Comments
Joined 1 year ago
Cake day: August 28th, 2023


  • I respectfully disagree. Any high-quality creator is tangibly penalized by YouTube’s recommendation algorithm for not optimizing their titles and thumbnails. A rare few choose to accept this penalty, but I don’t blame the many quality creators who choose to play the game that YouTube has made for everyone.

    Yes, the alternate titles may not be perfect, but I’d take any random person’s attempt at a title over the hyper-optimized ones any day. I’d rather make an informed decision to watch something, even with some degree of inaccuracy, than make a completely uninformed decision based on what an algorithm predicted would most likely get me to click and get hooked on a video, regardless of my own will or whether I am satisfied at the end of watching it.


  • Sorry, but has anyone in this thread actually tried running local LLMs on CPU? You can easily run a 7B model at varying levels of quantization (e.g., 5-bit quantization) and get a generalized, promptable LLM. Yes, it’s going to take ~4 GB of RAM (which is mem-mapped and paged into memory), but you can also fine-tune smaller, more specific models (like the translation one mentioned above) and get surprising intelligence at a fraction of the resources.

    Take, for example, phi-2, which performs as well as 13B-parameter models with only 2.7B parameters. Yes, that’s still going to take ~1.5 GB of RAM, which Firefox couldn’t reasonably ship, but many lighter-weight specialized tasks could easily use something like a fine-tuned 0.3B model with quantization.
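    The RAM figures above follow from simple arithmetic: quantized weight memory is roughly parameter count × bits per weight ÷ 8, ignoring runtime overhead like the KV cache. A minimal sketch (the function name and the ~4.5 bits/weight figure for a typical 4-bit-with-overhead quantization are my assumptions, not from the comment):

    ```python
    def quantized_model_gib(n_params: float, bits_per_weight: float) -> float:
        """Rough weights-only memory estimate for a quantized model, in GiB.

        Ignores KV cache, activations, and runtime overhead, so real
        usage will be somewhat higher.
        """
        return n_params * bits_per_weight / 8 / 1024**3

    # 7B model at 5-bit quantization -> roughly 4 GiB of weights
    print(round(quantized_model_gib(7e9, 5), 2))    # ≈ 4.07

    # phi-2 (2.7B params) at ~4.5 bits/weight -> roughly 1.4 GiB
    print(round(quantized_model_gib(2.7e9, 4.5), 2))  # ≈ 1.41
    ```

    This is why a 0.3B model at 4–5 bits lands well under 200 MiB, which is much closer to something a browser could plausibly bundle.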