Neywiny 5 hours ago

> Seagate is leveraging its heat-assisted magnetic recording (HAMR) technology to deliver its 6.9TB platter. If you want to check out how Seagate's HAMR technology works, check out our previous coverage. In a nutshell, HAMR uses heat-induced magnetic coercivity to write to a hard drive platter.

Wow so heat assisted magnetic recording is using heat to magnetically record data. Incredible explanation.

  • retrac 3 hours ago

    Heating a ferromagnetic material lowers its coercivity -- the heat softens it and makes it less magnetically "springy". Heat it past its Curie temperature and it loses its magnetization entirely.

    So, to write, zap the area with a laser to heat it up. The coercivity is lowered. This lets a weaker magnetic field work to magnetize the area. This allows packing more densely, as the weak field will not affect the neighbouring cooler and higher coercivity regions.

    (I think.)
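
    A toy Python sketch of that write logic (every number here is invented for illustration, not a real media parameter):

      # Heating a grain lowers its coercivity, so a head field too weak to
      # flip the cool neighbours can still flip the laser-heated target.
      CURIE_T = 700.0        # K, magnetization lost entirely here (made up)
      ROOM_T = 300.0         # K
      ROOM_COERCIVITY = 5.0  # tesla, far beyond any practical write head
      HEAD_FIELD = 1.0       # tesla, what the head can actually produce

      def coercivity(temp_k):
          """Coercivity falls roughly linearly to zero at the Curie point."""
          if temp_k >= CURIE_T:
              return 0.0
          return ROOM_COERCIVITY * (CURIE_T - temp_k) / (CURIE_T - ROOM_T)

      def can_write(temp_k):
          return HEAD_FIELD > coercivity(temp_k)

      print(can_write(650.0))  # True  -- grain under the laser spot
      print(can_write(300.0))  # False -- cool neighbouring grain stays put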

    This is not the first time lasers have been used to write to magnetic media. Magneto-optical discs (e.g. Sony's MiniDisc) were erased using laser heat. (MO discs were also read with a laser; the recording layer rotates the polarization of reflected light depending on its magnetization, the magneto-optic Kerr effect.)

  • puzzlingcaptcha 3 hours ago

    I guess you could say that MiniDisc was the original HAMR format.

  • wmf 5 hours ago

    Yeah, don't try to learn science from Tom's Hardware.

  • _wire_ 5 hours ago

    Mofo magnets! How do they work? With heat?!

    • dylan604 4 hours ago

      We prefer steam over magnets especially since nobody knows how they work, but whatever you do, don't get the magnets wet!

jtokoph 5 hours ago

I wonder what the lifespan, error rate and speed of these drives are

  • Yokolos 5 hours ago

    Probably no different than current drives? Who would pay more for worse drives? Particularly in enterprise, where defect rates and error rates make a much bigger difference and quickly add up across such a large number of drives.

    • cookiengineer 5 hours ago

      > Probably no different than current drives? Who would pay more for worse drives? Particularly in enterprise, where defect rates and error rates make a much bigger difference and quickly add up across such a large number of drives.

      Western Digital would like to have a word about shingled magnetic recording drives.

      • anonymars 4 hours ago

        Ha, the ones they mixed in with conventional drives, while still giving them the same model names and numbers? That was a good time, thanks WD

      • gruez 4 hours ago

        SMR drives aren't worse in any of those metrics except random writes. Yes, people running NAS with them got screwed over, but for your typical use case of storing movies they're fine.
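
        The random-write penalty is structural, though: shingled tracks overlap like roof shingles, so updating a track mid-zone forces rewriting everything shingled on top of it. A toy Python sketch (the zone geometry here is made up for illustration):

          TRACKS_PER_ZONE = 100  # invented; real SMR zones are typically ~256 MiB

          def tracks_rewritten(track_in_zone):
              """Physical tracks rewritten to update one logical track."""
              # The target track and everything after it in the zone must be
              # rewritten, since later tracks overlap earlier ones.
              return TRACKS_PER_ZONE - track_in_zone

          print(tracks_rewritten(99))  # 1   -- appending at the end, like CMR
          print(tracks_rewritten(0))   # 100 -- rewrite the entire zone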

almosthere 5 hours ago

I just bought a 2TB SSD that's the size of a Tic Tac container...

  • Octoth0rpe 5 hours ago

    Sure. And you paid, what, maybe $120? So, $60/TB. When Seagate commercializes these, it'll be around $10/TB. My last Seagate spinning disks for my NAS were 20TB for $150.

    SSDs have a valuable place in the world, but so do spinning disks. Physical size isn't a concern for my NAS (I mean, assuming we're talking < 300cm^3 for the whole setup..), but $/TB is.
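
    The arithmetic, for concreteness (prices are just the ones quoted above):

      ssd_price, ssd_tb = 120, 2    # the Tic Tac-sized SSD
      hdd_price, hdd_tb = 150, 20   # my last NAS disks

      print(ssd_price / ssd_tb)  # 60.0 $/TB for flash
      print(hdd_price / hdd_tb)  # 7.5  $/TB for spinning rust, ~8x cheaper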

    • commandar 5 hours ago

      Spinning rust still typically holds advantages for archival storage, as well.

      There was literally a headline on the front page here a few days ago re: data degradation of SSDs during cold storage, as one example.

      • saltcured 4 hours ago

        I missed that earlier post to ask a question that always bugs me... do SSDs, when powered on, actually "patrol" their storage and rewrite cells that are fading even when quiescent from the host perspective?

        Or does the data decay there as well, just as a function of time since cells were written?

        In other words, is this whole focus on "powered off" just a proxy for "written once" versus "live data with presumed turnover"? Or do the cells really age more rapidly without power?

        • stuxnet79 4 hours ago

          My understanding, based on my reading of the previous post, is that there are no hardware-level checks. SSDs need to be powered on every so often, and the integrity of the filesystem needs to be checked via something akin to zfs scrub. This should be done monthly at minimum.

          If you are paranoid about your data and not relying on filesystem-level checks from ZFS or Btrfs, you should probably avoid SSDs for long-term storage.
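
          A minimal Python runner for that monthly scrub, for cron or a systemd timer (assumes a ZFS pool named "tank"; the monthly cadence is just my suggestion above):

            import subprocess

            POOL = "tank"  # placeholder pool name

            # Kick off a scrub; zpool errors out if one is already running.
            subprocess.run(["zpool", "scrub", POOL], check=True)

            # Print pool state; scrub progress and any errors show up here.
            status = subprocess.run(["zpool", "status", POOL],
                                    capture_output=True, text=True, check=True)
            print(status.stdout)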

          • gruez 3 hours ago

            > My understanding, based on my reading of the previous post, is that there are no hardware-level checks

            There are "hardware level checks", it's just that they might assume regular usage. If your SSD is turned on regularly (eg. a few hours a day at least), your files are probably fine, even if you never read/scrub your rarely read files. If it is infrequently used, you're right that you probably have to do an end-to-end scan to make sure everything gets checked and possibly re-written.

            • Tempest1981 3 hours ago

              Is the same true for USB flash drives? Do they rely on the OS to scrub/refresh them?

            • kasabali 3 hours ago

              "probably" does the heavy lifting here.

              • gruez 2 hours ago

                I mean, obviously? SSDs and HDDs randomly fail for all sorts of reasons beyond random bitflips, so properly working ECC isn't enough to guarantee your files are "fine". Even if you're using something like ZFS, it's possible for one of the underlying drives to hit ECC errors and for another drive to fail before that can be caught. If your parity factor is 1 or less (e.g. RAIDz1), you'll also experience data loss.
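
                Back-of-the-envelope version of that failure mode (the AFR and rebuild time are assumptions pulled out of the air for illustration):

                  AFR = 0.015         # 1.5%/year drive failure rate (assumed)
                  REBUILD_DAYS = 2.0  # resilver time for a big drive (assumed)
                  SURVIVORS = 7       # the rest of an 8-wide RAIDz1 vdev

                  # Chance at least one survivor dies during the rebuild
                  # window, which with single parity means data loss.
                  p_one = 1 - (1 - AFR) ** (REBUILD_DAYS / 365)
                  p_any = 1 - (1 - p_one) ** SURVIVORS
                  print(f"{p_any:.4%}")  # ~0.06% per rebuild event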

      • ofrzeta 3 hours ago

        I researched that topic a while ago but with no usable outcome. The aim was to figure out how to handle SSDs used as backups. Is it enough to plug them into power once in a while so the firmware starts its refresh cycle? Do I need to do something else? How often do they need to be plugged in? Thankful for any pointers ...

      • Razengan 5 hours ago

        > Data degradation of SSDs during cold storage

        Why is that? I'd have expected solid-state electronics to last longer at low temperatures.

        Or is it precisely that, some near/superconductivity effects causing naughty electrons to escape and wander about?

        • khuey 4 hours ago

          They mean powered off, not physically cold. Electrons leak out of the NAND cells over time, and if the device isn't powered, the controller can't refresh them.

        • kingstnap 4 hours ago

          It's not superconductivity but quantum mechanics.

          High capacity hard drives nowadays use heat and strong magnetic fields to write patterns into the platter. It's pretty stable just sitting around doing nothing.

          High-density multi-level NAND works by quantum-tunnelling a few electrons through an insulator into an electrically isolated bit of semiconductor, using a strong electric field. Over time the electrons tunnel their way back out, but this usually only becomes a real problem once heavy writing has damaged the insulation.
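
          A toy Python model of why retention gets worse with wear and with more bits per cell (all constants are invented for illustration, nothing here is from a datasheet):

            import math

            def charge_remaining(years, pe_cycles):
                """Fraction of programmed charge left after sitting unpowered."""
                base_leak = 0.02             # per-year leak rate, fresh cell (assumed)
                wear = 1 + pe_cycles / 1000  # worn oxide leaks faster (assumed)
                return math.exp(-base_leak * wear * years)

            # QLC packs 16 charge levels into the window SLC splits in two,
            # so QLC tolerates ~1/16th of full-scale drift, SLC ~1/2.
            for cell, margin in (("SLC", 1 / 2), ("QLC", 1 / 16)):
                for cycles in (100, 3000):
                    years = 0.0
                    while charge_remaining(years, cycles) > 1 - margin:
                        years += 0.1
                    print(f"{cell}, {cycles} P/E cycles: ~{years:.1f} years")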

          • Razengan 3 hours ago

            Oh, so would actual cold (low temperatures) prevent/reduce that phenomenon?

        • arcanemachiner 5 hours ago

          By cold, I think they mean "powered off", not "low temperature".

    • PunchyHamster 3 hours ago

      Also a far lower chance of losing your data with an HDD left on a shelf for 3 years.

    • echelon_musk 3 hours ago

      Where are you buying 20TB HDDs for $150?

    • api 4 hours ago

      “Disk is the new tape” has been true for a while and will probably stay true.

      SSD also has longer term data loss issues when unpowered. Magnetic disk is still better in that respect too.

      • immibis 4 hours ago

        Tape is still half the cost per TB, but you have to be storing at least several hundred TB to break even with the cost of a single tape drive. For two drives it's certainly well over a petabyte.
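
        Rough break-even sketch (every price here is an assumption; check current LTO street prices before trusting any of it):

          TAPE_PER_TB = 7.5    # $/TB for LTO media, "half the cost" of disk
          DISK_PER_TB = 15.0   # $/TB for big HDDs (assumed)
          DRIVE_COST = 4500.0  # one LTO tape drive (assumed)

          # Tape wins once the media savings cover the drive:
          # DRIVE_COST + TAPE_PER_TB * tb < DISK_PER_TB * tb
          breakeven_tb = DRIVE_COST / (DISK_PER_TB - TAPE_PER_TB)
          print(breakeven_tb)      # 600 TB with one drive
          print(2 * breakeven_tb)  # 1200 TB (1.2 PB) if you want a spare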

  • wmf 5 hours ago

    Flash is far denser than hard disks but as long as it's more expensive it's not that relevant.

1970-01-01 5 hours ago

>7TB to 15TB platters available from 2031 onward

Isn't 0.1TB a little too low? I'm sure that if they only improved this little over 5 years, the company would be in big trouble.

  • martinpw 4 hours ago

    From the article:

    > these 6.9TB platters are still in development and are not planned to be used for another 5 years.

system2 3 hours ago

What is the theoretical limit of a standard-sized platter? ChatGPT thinks 50 TB max. Some forums say petabytes. Is there a known limit for it? I can't find much on the internet about the maximums.

  • retrac 3 hours ago

    A single iron atom can be a magnetic domain. So picture a surface coated with single-atom domains, spaced a few atoms apart. I would posit that's close to the 2D limit, for physics reasons: the domains can't sit directly next to each other or it becomes impossible to read or write them individually.

    So very roughly, about 1 bit per square nanometre. That works out to 10^14 bits, or about 12 terabytes, per square centimetre. With roughly 65 square centimetres of usable surface on a 3.5" platter, that's on the order of 6 petabits, the better part of a petabyte, per side.

    Whether it will ever be possible to actually read or magnetize domains that small without disturbing the neighbouring domains is the question, and no one knows. There have been several breakthroughs, like perpendicular recording, that have brought us much closer to the theoretical limit above than anyone would have thought.
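
    Running the numbers (same assumptions as above: 1 bit per square nanometre on a ~95 mm platter, hub excluded):

      import math

      BITS_PER_NM2 = 1.0
      NM2_PER_CM2 = (1e7) ** 2  # 1 cm = 1e7 nm

      outer_r, inner_r = 4.75, 1.25  # cm; rough 3.5" platter and hub radii
      area_cm2 = math.pi * (outer_r ** 2 - inner_r ** 2)

      bits_per_side = BITS_PER_NM2 * NM2_PER_CM2 * area_cm2
      print(f"{area_cm2:.0f} cm^2 usable")              # ~66 cm^2
      print(f"{bits_per_side / 8 / 1e15:.2f} PB/side")  # ~0.8 PB per side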

  • londons_explore 3 hours ago

    Practically, we are nowhere close to the limit if you can record in 3D, not just on the surface as all current drives do.

Razengan 5 hours ago

The perfect number.

The ideal has been achieved. We need go no further.

  • znpy 3 hours ago

    We just need to fit 420 platters per drive now