• 0 Posts
  • 79 Comments
Joined 3 years ago
Cake day: August 6th, 2023


  • CMR performs better under all workload types.

    Shingled Magnetic Recording overlays tracks on top of each other like roof shingles. That lets you fit more tracks, and therefore more data, on the same platter. Unfortunately it also means that rewriting data forces the drive to rewrite the overlapping tracks further along in the same zone, and all that reshuffling makes writes very slow while it's happening (say you edit a file or replace it, delete some files and copy over others).

    SMR buys more storage for less money but takes a serious performance hit. (Right now the largest CMR disks are about 28TB, while you can get 40TB SMR disks, so it can significantly increase capacity.) It shouldn't be used in many scenarios: for archival backup it's fine, but for disks that have data changing on them anywhere near regularly it's not great.
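    The rewrite-amplification mechanic can be sketched with a toy model (the zone size and the track-level granularity here are illustrative assumptions, not any vendor's actual firmware behavior):

```python
# Toy model: in an SMR zone, tracks overlap like shingles, so rewriting
# track i forces rewriting every later track in the same zone. A CMR
# drive would rewrite exactly one track in every case.
def smr_rewrite_cost(zone_tracks: int, modified_track: int) -> int:
    """Number of tracks that must be rewritten when one track changes."""
    if not 0 <= modified_track < zone_tracks:
        raise ValueError("track index outside the zone")
    # The modified track plus all tracks shingled on top of it downstream.
    return zone_tracks - modified_track

print(smr_rewrite_cost(100, 0))   # worst case: the whole 100-track zone
print(smr_rewrite_cost(100, 99))  # best case: last track in the zone, cost 1
```

    This is also why SMR drives often feel fast at first: writes land in a conventional (CMR) cache region and are reshuffled into the shingled zones later, so the slowdown shows up on sustained or repeated writes.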

    I want to underline that USB-powered portable 5400RPM disks are already slow when CMR, so as SMR they get a lot slower still in write performance (one I had would drop to sustained write speeds in the low 20MB/s range when over 60% full). By contrast, a 7200RPM SMR disk with proper 12V power from a PSU rail or an AC adapter would likely be at least double that at worst.
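    If you'd rather measure this than take anecdotes on faith, a rough sustained-write benchmark is easy to sketch (sizes here are illustrative; to get past an SMR drive's CMR cache region you'd want to write far more data than this):

```python
import os
import tempfile
import time

def sustained_write_mb_s(path: str, total_mb: int = 256, chunk_mb: int = 4) -> float:
    """Write total_mb of zeroes with an fsync and return throughput in MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to the disk, not just the page cache
    elapsed = time.monotonic() - start
    os.remove(path)
    return total_mb / elapsed

# Point the path at a file on the disk under test.
with tempfile.TemporaryDirectory() as d:
    print(f"{sustained_write_mb_s(os.path.join(d, 'bench.bin'), total_mb=64):.0f} MB/s")
```

    Running it twice in a row on a nearly full SMR disk is where you'd expect to see the collapse, once the drive is busy reshuffling the previous run's data.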

    So SMR has its uses and its place. It's just that people who don't know better might use it where CMR is more appropriate and would give them a better experience. By all means, if you're using SMR disks for infrequently updated backup snapshots of your primary drives, continue to do so. They're fine for that, especially if the backups run on machines that can be left on for days while the data is slowly written.





  • All that would happen, absolute worst case, if MS breaks this is that your users would get a whining complaint about not being activated: a small “Activate Windows” watermark stuck in the lower right of the screen, and losing the ability to change wallpapers, customize Windows colors, etc.

    To be clear, it wouldn't break the install, and it would leave it in a state where you could use an updated version of MAS (reminder: MAS supports multiple activation options) to fix it remotely.



  • If you’re going Intel you can check the ark.intel pages for the processors in the devices you’re looking at. Intel's documentation is pretty good, so it'll show you what integrated graphics they have and so on.

    Ideally you want a chip that can do hardware decoding (and if possible encoding if you’re serving media to others and intend for it to transcode and not direct-play) of common codecs so you’re not eating a massive power bill or generating tons of heat or getting bogged down in resource utilization.

    AV1 is the only tricky part when it comes to hardware decode support: typically only the newer chips can decode it in hardware. Maybe you don't use it yourself, but it's something to consider if you have, or plan to have, lots of AV1-encoded files. (There is always software decode, of course.)
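    On Linux you can also check what the driver actually exposes, rather than just reading spec sheets, with `vainfo` from libva-utils. A sketch of parsing its output; the sample below is abridged and made up for illustration, not from any specific machine:

```python
import re

# vainfo prints one line per VA-API profile/entrypoint pair the GPU
# driver exposes; VAEntrypointVLD means hardware decode.
SAMPLE_VAINFO = """\
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileAV1Profile0            : VAEntrypointVLD
"""

def hw_decode_profiles(vainfo_output: str) -> set[str]:
    """Return the codec profiles that have a decode (VLD) entrypoint."""
    profiles = set()
    for line in vainfo_output.splitlines():
        m = re.match(r"\s*VAProfile(\w+)\s*:\s*VAEntrypointVLD", line)
        if m:
            profiles.add(m.group(1))
    return profiles

decodable = hw_decode_profiles(SAMPLE_VAINFO)
print("AV1 hardware decode:", any(p.startswith("AV1") for p in decodable))
```

    On a real box you'd feed it the output of running `vainfo` yourself; if no AV1 profile shows up, playback falls back to software decode and the CPU pays for it.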

    The Intel N150 can drive a 4K desktop. You won't be doing 4K gaming on it at all, but it handles the desktop and video playback and is a low-power-consumption chip. It should also be able to support at least 2-3 4K transcodes. A lot of enthusiasts use it for exactly this purpose, and it's fairly snappy for uses like these.

    Anything more powerful than an N150 will also be fine for 4K video viewing, transcoding, a 4K desktop, etc., so if you want to spend more on a more powerful Intel chip you can. Just avoid the 13th/14th-generation i-series (i5/i7/i9), especially used, because of the hardware degradation their flawed design caused; there are a lot of damaged ones floating around from people trying to offload them.

    144Hz may be the really tricky part. Lots of these mini boxes are capped at 60Hz, so definitely double-check that. There's always the option of a DisplayPort-to-HDMI cable too, if the box has a DP output that supports the necessary 4K refresh rate. To be honest, the N150 might struggle driving that.
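    The 60Hz caps come down to link bandwidth. Back-of-the-envelope math (8-bit RGB, with blanking overhead approximated at 20%, a simplification) shows why 4K at 144Hz needs an HDMI 2.1- or DP 1.4-class output, given HDMI 2.0 tops out at 18Gbps:

```python
def uncompressed_video_gbps(width: int, height: int, hz: int,
                            bits_per_pixel: int = 24,
                            blanking: float = 1.2) -> float:
    """Rough link bandwidth needed, including ~20% blanking overhead."""
    return width * height * hz * bits_per_pixel * blanking / 1e9

for hz in (60, 144):
    print(f"4K @ {hz} Hz: {uncompressed_video_gbps(3840, 2160, hz):.1f} Gbps")
# 4K @ 60 Hz fits inside HDMI 2.0's 18 Gbps; 4K @ 144 Hz does not.
```

    Real links also use techniques like DSC or chroma subsampling to squeeze under these limits, so the exact supported modes depend on the port and the cable.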

    Oh, and be aware of thermal throttling. Lots of manufacturers stuff Ultra 9-class chips into laptops and minis with inadequate cooling, and they thermal-throttle like crazy, so you pay $800 and get the performance of a properly cooled Ultra 7 or 5.

    To loop back to whether you need a dedicated GPU: ask yourself whether you're transcoding streams for others or mostly direct-playing without transcode. The integrated GPU on the CPU die should be good enough unless you have an awful lot of simultaneous streams or some other pressing need.

    You can run whatever distro you want. There are extremely specialized ones like OSMC (https://osmc.tv/), which is basically Kodi running on Debian without a desktop environment (extremely media-center focused).




  • If the drive previously wasn’t making this noise (it had been filled with data and in use for days or weeks without ever making it) and the noise doesn’t happen in response to data writes (even hours after the fact), then it might be cause for concern that the drive is dying.

    In general it’s a good idea to have backups of any important data, but I’d really make sure that’s the case here and assume the drive could imminently fail. A change in a hard drive’s sound (whether its idle noises or its active read/write noises) is a warning sign of potential failure. That said, it could be other things: as drives age, mechanical components can change their sound signature without necessarily failing, and the drive may go on working fine for years.

    That said there are normal processes in drives that can make noise:

    • Some sort of operation driven by your OS itself. I won’t try to list them all, but something could be accessing things in the background: file-table or journaling operations, writes, checks, and other low-level filesystem maintenance.

    • SMR drives may continue to write and shuffle data for quite some time after being written to, especially after a large amount of data. Even with multiple terabytes, though, this should probably resolve within 12 hours.

    • Many drives, especially high-capacity enterprise drives, make a -soft- clicking sound when idle (but not powered off) as the arms sweep the surface, if I recall correctly to spread lubricant around or perform some sort of basic mechanical maintenance. It’s part of normal drive operation. It may occur more frequently after a massive amount of writes (like filling a drive), or may not kick in until a certain amount of data has been written; I’m not sure how that works, as it’s probably proprietary to the manufacturer.
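    Rather than guessing from sound alone, the drive's SMART attributes give a more objective read on its health. A sketch that flags the raw counts most associated with impending failure; the attribute table is an abridged, made-up sample in `smartctl -A` format (smartmontools), not from a real drive:

```python
# smartctl -A prints one row per SMART attribute; the last column is the
# raw count. The sample below is illustrative only.
SAMPLE_SMARTCTL = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
"""

# Raw counts that should stay at 0 on a healthy drive.
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def suspect_attributes(smartctl_output: str) -> dict[str, int]:
    """Return watched attributes whose raw count is nonzero."""
    bad = {}
    for line in smartctl_output.splitlines():
        parts = line.split()
        if len(parts) >= 10 and parts[1] in WATCH:
            raw = int(parts[9])
            if raw > 0:
                bad[parts[1]] = raw
    return bad

print(suspect_attributes(SAMPLE_SMARTCTL) or "no reallocated/pending sectors")
```

    Nonzero and climbing reallocated or pending sector counts are a much stronger "back up now" signal than any noise.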

    Should I be worried about this? To my paranoid mind it feels like something is slowly reading my files with some exploit to bypass the indicator light to fly under the radar.

    How would it do this? Is it installing hacked firmware to your enclosure too? I doubt you’re that valuable of a target.

    If you’re worried about malware then back up your stuff, nuke the install and reinstall from scratch. I wouldn’t worry about it if this is the only thing you’re seeing and find it unlikely.


  • Majestic@lemmy.ml to Linux@lemmy.ml: Antiviruses?
    4 months ago

    I would say there aren't any worth recommending, and that best practice is avoiding random scripts you don't understand, keeping software up to date through package managers, and using virtualization tools. Also look into Portmaster, an interactive firewall.

    Meta rant on this subject

    What frustrates me about the answers these questions get is that no one ever offers tools comparable to the Windows ones, increasingly, I think, because they simply don't exist outside of very expensive enterprise subscriptions that require plunking down no less than a thousand dollars a year. (Certainly none of the major AV vendors offer consumer Linux versions of their software, though most offer enterprise Linux endpoint protection with minimum spends of several hundred if not several thousand dollars a year.)

    ClamAV is primarily a definition-based AV, the weakest and most useless kind. Sure, it's somewhat useful for making sure your file server isn't passing around year-old malware, but it's basically useless for real-time prevention of emerging and unknown threats. For that you need HIPS, behavior control, conditional/mandatory access control, heuristics, etc. ClamAV also has one of the worst detection rates in the industry, often under 60%, which is laughably bad, so it's really not a front-line contender at all.

    Compare Clam to consumer "suite" offerings with complex behavioral control like ESET or Kaspersky, which feature the aforementioned HIPS, behavior control, and complex heuristics to detect and block malware-like behavior in real time (for example, something accessing and then trying to upload your KeePass database files, or surreptitiously starting to encrypt all your user files with RSA-4096). Clam just isn't in the same ballpark as anything competently done in the last 20 years.

    I haven't relied on a traditional AV's definition detections in years. They're worthless; it's impossible to keep up. The AVs I've deployed are there for their heuristics, behavior control, HIPS, etc., which actually stop new, emerging, and unknown threats, or at least put real obstacles in their way. That's what Linux needs, what users need: forget traditional virus definitions, give us behavior control, HIPS, and some basic heuristics for "gee, this sure looks like malware behavior, better ask the user whether they want and intend this".

    “Just be smart about what you run” isn't a realistic solution when people say Linux is for everyone, including their tech-illiterate relatives. Yes, Linux is a lot safer if you install everything from package managers, but that isn't bulletproof either; we've seen several spectacular upstream malware insertions into the build repos of huge software projects in recent years (the xz-utils backdoor, for example).

    “Just maintain back-ups” isn't helpful against smart cryptolocker malware, which may hide itself for weeks or months and encrypt your files as you back them up. Nor does it protect against account compromise from stolen passwords or a keylogger. Nor does it defend you against persecution after being hit by mercenary or government spyware from overreaching regimes; technically, it lowers the bar for them to gather evidence that you're an illegal gay person or whatever.

    Back-ups are disaster recovery. Everyone should have them, but part of a layered defense is preventing the disaster, the inconvenience, and the invasion of privacy before they happen. Having your identity stolen or accounts taken over isn't as simple as reverting to a back-up; it can mean hours or days of phone calls, emails, stress, and hassle that drag on for weeks or months.

    Portmaster is a start for this type of system control and protection, as it's a very effective interactive firewall, but as far as I know there are no consumer-available comprehensive behavior-control + HIPS desktop security solutions for Linux. Several vendors offer default-deny mandatory access control with an interactive mode for Windows, but none offer Linux versions outside enterprise-sized contracts beyond affordability and reason. If anyone knows otherwise, I would love to hear about these solutions, as I want to implement them on my Linux machines; I am not comfortable with just my network IPS and firewall by themselves, without comprehensive endpoint security.


  • I think the home media collector use case is actually a complete outlier in terms of what these formats are actually being developed for.

    Well yeah, given who makes it, but it's what I care about. I couldn't care less about obscure academic efforts (or the profits of some evil tech companies) except as vague curiosities. HEVC wasn't designed with people like me in mind either, yet it means I can have about 30% more stuff in the same space, and the encoders are mature enough that the difference in encode time versus AVC is negligible on a decently powered server.

    Transparency (or great visual fidelity, period) also isn't likely the top concern here, because development is driven by companies that want to save money on bandwidth and perhaps on CDN storage.

    Which I think is a shame. Lower bitrates at transparency -should- be the goal: getting streaming content to consumers at very high quality, ideally close to or equivalent to UHD BluRay for 4K. Instead, companies bit-starve and hop onto these new encoders because they can use fewer bits while leaning on tricks to maintain a certain baseline of perceptual image quality that passes the sniff test for the average viewer. So instead of quality bumps, we get them using fewer bits and passing the savings on to themselves, with little meaningful upgrade in visual fidelity for the viewer. Which is why it's hard to care much about a lot of this stuff when it doesn't really benefit the user.


  • And which will be so resource-intensive to encode compared with existing standards that it'll probably take 14 years before home media collectors (or yar har types) are able and willing to use it over HEVC and AV1. :\

    As an example, AV1 encodes are to this day extremely rare in the p2p scene. Most groups still work with h264 or h265, even those focused specifically on reducing sizes while maintaining quality. By contrast, HEVC had significant uptake in the p2p scene within 3-4 years of its release (we're on year 7 for AV1).

    These greedy, race-to-the-bottom device makers are still fighting AV1. With people keeping devices longer and upgrading less, and tons of people watching on under-powered smart TVs (forcing streaming services to keep older codecs like h264/h265 around for those customers), I fear it's going to take a depressingly long time for AV1 to be anything but a web-streaming phenomenon.


  • Majestic@lemmy.ml to homelab@lemmy.ml: *Permanently Deleted*
    6 months ago

    Disclaimer: I’ve not used that exact machine but have worked with similar Lenovo/Dell stuff.

    On HP’s spec sheet it says the max HDD size is 2TB. Do I need to do anything to the BIOS to allow bigger drives?

    Possibly set the firmware to UEFI mode and partition the drive as GPT. Some very old BIOSes may simply refuse to boot off a drive that big, while others work as long as the boot files sit within the first 2TB.
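    The 2TB figure isn't arbitrary; it falls straight out of MBR's 32-bit sector addressing:

```python
# MBR stores partition start/length as 32-bit LBAs; with classic
# 512-byte sectors that caps addressable space at exactly 2 TiB.
SECTOR = 512
mbr_limit = (2**32) * SECTOR
print(mbr_limit == 2 * 2**40)   # True: exactly 2 TiB

# GPT uses 64-bit LBAs, so the same math gives 8 ZiB, i.e. no
# practical limit for any drive you can buy.
gpt_limit = (2**64) * SECTOR
print(gpt_limit // 2**70)       # 8 (ZiB)
```

    So the spec-sheet "max 2TB" usually just means the machine shipped in the MBR era; repartitioning the data drive as GPT sidesteps it even if you keep booting from a smaller disk.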

    I’ve heard it’s possible to add a third 3.5in HDD in the DVD drive bay. Can anyone confirm? Do you need a bay adapter or whatever?

    Often these form factors have a SATA plug for a DVD drive. Be aware that this port is usually SATA 2 at best (3Gbps, vs SATA 3's 6Gbps) and often only SATA 1 (1.5Gbps), since DVD drives need far less than that. That's not a huge limiting factor outside of bursts and saturating the drive's cache, since mechanical hard drives tend to struggle to sustain much above 250-300MB/s writes, though SATA 1's roughly 150MB/s of usable bandwidth can bottleneck a modern drive. I wouldn't put a RAID member on it if avoidable; RAID drives should sit on SATA links of matching speed.

    You can use a bay adapter, or you can set the drive bare on the surface, but that may induce vibrations, which in theory can shorten a mechanical drive's life in addition to being annoyingly noisy. An SSD located there wouldn't have this problem, as it's safe to set a SATA SSD on a bare surface. Though if the SSD gets heavy regular use, you might still invest in some sort of heat solution, like an aluminum 2.5" dock to place it in.

    If you really do want to put a 3.5" spinning-disk HDD there without paying for a dock, at least put rubber between it and the metal of the case, either little rubber standoffs or a flat rubber pad. That may worsen heat slightly but should at least solve the vibration.

    You can of course buy a PCIe SATA (or SAS) card and connect the drive to that for higher link speeds.

    The other questions I'll leave to other people. That said, hardware RAID tends to come with lots of problems in home-lab setups; software RAID at the host OS level is generally recommended as easier to recover from and less prone to trouble.





  • Probably the best choice if OP is dreading 11: put it off and hope that in 3 years Linux support has matured even more for their use cases.

    MS support has used this software themselves in an edge case where they couldn't get Windows to activate properly.

    You have two options here:

    1. Enable the extended support (no payment needed with this software; if OP absolutely refuses to run it, they can pay Microsoft directly, though finding where to do that takes some digging) and run on that for 3 years, until 2028.

    2. Upgrade to LTSC IoT using the method they outline at the link. Again there are two options: one is free; the other is following that guide but buying a gray-market key (on G2A, for instance) for LTSC IoT, which avoids running this software on their PC but means paying someone for a corporate volume key they're not technically allowed to resell. That gets support until 2032.