🚀 Elevate Your Storage Game!
The HGST Deskstar NAS 3.5-inch 6TB internal hard drive is engineered for high-performance storage, pairing a 7200 RPM spindle speed with a robust 128MB cache, making it ideal for NAS environments. Its generous 6TB capacity provides ample space for your data in a standard 3.5-inch form factor that drops easily into most NAS bays and desktop enclosures.
Form Factor | 3.5 Inches |
Brand | Western Digital |
Series | Deskstar NAS |
Item model number | HDN726060ALE610 |
Hardware Platform | PC |
Item Weight | 1.58 pounds |
Product Dimensions | 4 x 5.79 x 1.03 inches |
Color | Silver |
Capacity | 6 TB |
Hard Drive Rotational Speed | 7200 RPM |
Manufacturer | HGST |
ASIN | B00O0M5QK8 |
Is Discontinued By Manufacturer | No |
Date First Available | August 3, 2017 |
M**G
Solid performers, drive-seeks are noisy
UPDATE 7/13/2016: Drives have been solid and still running 24/7 since purchase well over 1 year ago. S.M.A.R.T. values for all 8 disks are great. No complaints.

I bought 9 of these drives; a few from Amazon and the rest from other places to mix up the lots and dates of manufacture. 8 of the drives are in a RAID10 configuration (4 sets of 2-disk spans) controlled by a dedicated LSI MegaRAID SAS 9361-8i PCI-Express 3.0 x8 12Gb/s card (future-proofing). The stripe size defined is 256KB since the array will be storing mainly large files. However, benchmarking was done using single-disk, 6-disk (3 sets of 2-disk spans) and 8-disk (4 sets of 2-disk spans) array configurations.

Due to some unmentioned system limitations, the drives are set up in the same box as my ESXi host; a disk container was created and attached to a guest machine that serves as my dedicated NAS. Said guest machine runs CentOS 7.1 as its operating system.

The filesystem chosen is XFS, as it scales well with large storage arrays. The partition consists of the entire array size available. It was configured using the following set of commands (note on the "print" read-out below: I added pipe "|" separators as Amazon stripped out my formatting):

# parted -a optimal /dev/sdx
(parted) mklabel gpt
(parted) unit s
(parted) mkpart primary xfs 2048s 11721043967s
(parted) print
Number | Start | End | Size | File system | Name | Flags
1 | 2048s | 11721043967s | 11721041920s | xfs | primary |

Verify alignment:
(parted) align-check optimal 1
1 aligned

Save/write the partition:
(parted) quit

NOTE: For optimal/best performance, pass the 'optimal' alignment option so the partition boundaries fall on multiples of 1MiB (1024x1024 bytes); the starting sector (2048) and the end boundary (the ending sector plus 1, i.e. 11721043968) are then both evenly divisible by the RAID stripe size (256KB).

The filesystem was created specifying the block-size, stride and stripe-width options for optimal performance with the array and the partition created. The command is as follows:

# mkfs.xfs -b 4096 -E stride=64,stripe-width=256 /dev/sdx1

NOTE: Your numbers will be different. I highly recommend using one of the stride and stripe-width calculators available on the Web. You can verify the filesystem attributes once it is created using either of the commands below:

# tune2fs -l /dev/sdx1
# dumpe2fs /dev/sdx1 | less
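NOTE: As a cross-check of those stride/stripe-width values (just the arithmetic, not something re-run on this array): stride = RAID chunk size / filesystem block size = 256KB / 4KB = 64, and stripe-width = stride x data-bearing disks = 64 x 4 = 256 for the 8-disk RAID10, which matches the command above. Also worth flagging: stride/stripe-width (and tune2fs/dumpe2fs) are ext2/3/4-style options; for a filesystem that really is XFS, the equivalent geometry hints would go to mkfs.xfs as data-section suboptions, roughly as sketched below (device and mount point are placeholders, verify the resulting sunit/swidth yourself):

# mkfs.xfs -b size=4096 -d su=256k,sw=4 /dev/sdx1   (su = RAID chunk size, sw = number of data-bearing disks)
# xfs_info /mount/point                             (confirm the stripe geometry after creation)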
--------------------------------------
SETUP:
Keep in mind that many factors will affect performance: hardware capabilities/limitations, hardware controller configuration, RAID array stripe size, NIC performance, caching, the kernel I/O scheduler (Linux), the filesystem format and options you choose (Linux; see 'stride' and 'stripe-width' above), disk mount options, defaults or user-specified (Linux), router/switch, etc.

I do not run actual server hardware, but a re-purposed micro-ATX 'desktop' converted to an ESXi host with various guest machines, drawing an average of ~144 watts. Here is the complete set-up:

Fractal Design Node 804 case
3 x included fans that came with the Fractal Design Node 804 case
3 x Noctua NF-F12 iPPC 2000 PWM (SSO2 bearing, retail cooling)
Intel i7-4790S Haswell Quad-Core 3.2GHz LGA 1150 Z87 65W
ASUS MAXIMUS VI GENE LGA 1150
G.SKILL Ripjaws X Series 32GB (4x8GB) DDR3 2133
Samsung 850 Pro 512GB SSD
Corsair RM750i 750 watt PSU (had on hand)
CyberPower CP1500PFCLCD PFC UPS (supports my Corsair's "Active PFC")
LSI MegaRAID SAS 9361-8i PCI-Express 3.0 x8 12Gb/s
2 x LSI LSI00410 0.6m internal cable, SFF8643 to x4 SATA HDD
8 x HGST Deskstar NAS 6TB 7200 RPM SATA 6.0Gb/s 128MB cache

For LAN benchmarks, the host machine is wired with CAT6 to an enterprise-class Ubiquiti EdgeRouter. Benchmarking includes scores from the single-disk, 6-disk and 8-disk RAID10 configurations.

------------------------------------------
BENCHMARKS:
NOTE: These tests go a little deeper than your typical review to show various scenarios and how these disks perform. If you don't understand queue depths, block sizes, etc., and how and when you'll generally face them, then I suggest you stick to the "HDParm" and "DD" scores below if you are looking for general performance numbers.

Samba 4 transfer of a 2GB file to another wired computer [Windows 8.1]:
Sequential read: 113 MBps
Sequential write: 113 MBps
The theoretical maximum sequential transfer over Gigabit Ethernet is 125MBps (1000/8 = 125), so that is pretty darn close. I have tried tweaking SMB socket option values in my Samba configuration and increased the kernel read-ahead cache, but it would appear that I have hit my ceiling.

---------- HDParm ----------
I ran HDParm 5 times and took the average:
"hdparm -Tt /dev/sdx"
(single disk)
Cache read average: 16639 MBps
Average = 200.444 MBps
(6-disk array)
Cache read average: 16817 MBps
Average = 632.578 MBps
(8-disk array)
Cache read average: 16538.47 MBps
Average = 824.900 MBps

------------ DD -------------
I ran "dd" with the following parameters ("/temp/testfile" = the RAID array partition mounted at /temp; "conv=fsync" to force synchronous I/O; "rm /temp/testfile" removes the test file after completion):
"dd if=/dev/zero of=/temp/testfile bs=1G count=12 conv=fsync; rm /temp/testfile"
Single-disk sequential performance: 208 MB/s
6-disk RAID10 array sequential performance: 621 MB/s
8-disk RAID10 array sequential performance: 812 MB/s

------------ FIO -----------
Important note: FIO works at a low level. Be sure that you specify a 'filename', or else tests will be written over your filesystem, most likely causing filesystem corruption.
FIO is great for providing storage performance statistics based on more real-world scenarios thanks to the abundance of options available; I'll run through some of them below. NOTE: This part of the review assumes that you have the technical knowledge to understand the concepts being described.
The first performance test below is a pretty straightforward sequential read test with a block size of 4k, 4 worker processes, an I/O depth of 4, and a 2GB file transferred over a 60-second runtime. Here is the command I used:
"fio --filename=/path/to/file/on/disk --direct=1 --rw=read --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=4k --iodepth=4 --numjobs=4 --runtime=60 --group_reporting --name=4k_seq_test --size=2g"
(single disk)
read iops = 41995
read trans. avg. = 167983 KB/s (167.983 MBps)
(6-disk array)
read iops = 126,167
read trans. avg. = 504668 KB/s (504.668 MBps)

---------
Change to write mode '--rw=write':
"fio --filename=/path/to/file/on/disk --direct=1 --rw=write --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=4k --iodepth=4 --numjobs=4 --runtime=60 --group_reporting --name=4k_seq_test --size=2g"
(single disk)
write iops = 41730
write trans. avg. = 166920 KB/s (166.92 MBps)
(6-disk array)
write iops = 105,437
write trans. avg. = 421750 KB/s (421.75 MBps)

---------
Now for a mix of reads and writes together (70% reads, 30% writes) with various block sizes and queue-depth changes. The following command is a RANDOM read/write test (70% reads, 30% writes; note the size option has been shrunk to 512MB, the "--rw" option value has changed to "randrw", and the option "--rwmixread=70" has been added):
"fio --filename=/path/to/file/on/disk --direct=1 --rw=randrw --rwmixread=70 --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=4k --iodepth=4 --numjobs=4 --runtime=60 --group_reporting --name=4k_rwmix70-30_test --size=512MB"
(single disk)
read iops = 284
write iops = 121
read trans. avg. = 1137 KB/s (1.137 MBps)
write trans. avg. = 484 KB/s (0.484 MBps)
(6-disk array)
read iops = 2255
write iops = 971
read trans. avg. = 9022 KB/s (9.022 MBps)
write trans. avg. = 3886 KB/s (3.886 MBps)
(8-disk array)
read iops = 4456
write iops = 1906
read trans. avg. = 17827 KB/s (17.827 MBps)
write trans. avg. = 7626.5 KB/s (7.6265 MBps)

----------
Changed block size from 4k to 8k:
"fio --filename=/path/to/file/on/disk --direct=1 --rw=randrw --rwmixread=70 --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=8k --iodepth=4 --numjobs=4 --runtime=60 --group_reporting --name=4k_rwmix70-30_test --size=512MB"
(single disk)
read iops = 294
write iops = 127
read trans. avg. = 2359 KB/s (2.359 MBps)
write trans. avg. = 1023 KB/s (1.023 MBps)
(6-disk array)
read iops = 2479
write iops = 1068
read trans. avg. = 19834 KB/s (19.834 MBps)
write trans. avg. = 8551 KB/s (8.551 MBps)

----------
Changed block size to 16k:
"fio --filename=/path/to/file/on/disk --direct=1 --rw=randrw --rwmixread=70 --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=16k --iodepth=4 --numjobs=4 --runtime=60 --group_reporting --name=4k_rwmix70-30_test --size=512MB"
(single disk)
read iops = 293
write iops = 125
read trans. avg. = 4691 KB/s (4.691 MBps)
write trans. avg. = 2014 KB/s (2.014 MBps)
(6-disk array)
read iops = 2398
write iops = 1029
read trans. avg. = 38373 KB/s (38.373 MBps)
write trans. avg. = 16470 KB/s (16.47 MBps)

-----------
Changed to sequential reads and writes (70% reads, 30% writes; rw=rw, size=2g, bs=8k):
"fio --filename=/path/to/file/on/disk --direct=1 --rw=rw --rwmixread=70 --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=8k --iodepth=4 --numjobs=4 --runtime=60 --group_reporting --name=4k_rwmix70-30_test --size=2g"
(single disk)
read iops = 1515
write iops = 650
read trans. avg. = 12127 KB/s (12.127 MBps)
write trans. avg. = 5207 KB/s (5.207 MBps)
(6-disk array)
read iops = 41615
write iops = 17909
read trans. avg. = 332920 KB/s (332.92 MBps)
write trans. avg. = 143272 KB/s (143.272 MBps)

-------------
Changed queue depth to 16:
"fio --filename=/path/to/file/on/disk --direct=1 --rw=rw --rwmixread=70 --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=8k --iodepth=16 --numjobs=4 --runtime=60 --group_reporting --name=4k_rwmix70-30_test --size=2g"
(single disk)
read iops = 2087
write iops = 899
read trans. avg. = 16700 KB/s (16.7 MBps)
write trans. avg. = 7193 KB/s (7.193 MBps)
(6-disk array)
read iops = 41062
write iops = 17667
read trans. avg. = 328501 KB/s (328.501 MBps)
write trans. avg. = 141342 KB/s (141.342 MBps)

-------------
Changed block size to 16k and queue depth to 32:
"fio --filename=/path/to/file/on/disk --direct=1 --rw=rw --rwmixread=70 --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=16k --iodepth=32 --numjobs=4 --runtime=60 --group_reporting --name=4k_rwmix70-30_test --size=2g"
(single disk)
read iops = 1283
write iops = 554
read trans. avg. = 20535 KB/s (20.535 MBps)
write trans. avg. = 8871 KB/s (8.871 MBps)
(6-disk array)
read iops = 31552
write iops = 13524
read trans. avg. = 504839 KB/s (504.839 MBps)
write trans. avg. = 216388 KB/s (216.388 MBps)

----------- SUMMARY -----------
These drives have been rock solid running 24/7. They are quite noisy when active, but personally that is a moot 'issue' for me since they sit alongside multiple servers running 24/7. The drives are rated for 1M hours MTBF and carry a 3-year warranty; I really wish it were 5 years. If the Backblaze hard drive reliability reports paint a relative picture of what can be expected from these drives, then hopefully they will run reliably for a long time.
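NOTE: For anyone wanting to keep an eye on S.M.A.R.T. values the way the update above describes, a minimal sketch with smartmontools follows. This is not taken from the setup above; the device names are placeholders, and disks sitting behind a MegaRAID controller typically need the megaraid device type:

# smartctl -a /dev/sda                 (full SMART report for a directly attached disk)
# smartctl -a -d megaraid,0 /dev/sda   (physical disk 0 behind an LSI MegaRAID controller)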
C**I
Fast drive, but noisy
Note: what I got is actually the new version (HDN726060ALE614). I think the old version (HDN726060ALE610) has been out of production, so I wonder whether everything sold here is actually the new version. But of course, I don't know...

Pros:
The drive is brand new and came in its original, unopened box. My unit was produced in May 2018, so pretty recent. All S.M.A.R.T. data seems to indicate that the drive is new. The performance is great: an HDTune benchmark got ~200MB/s on outer tracks, ~100MB/s on inner ones, ~160MB/s on average, and ~13ms seek time on random access. I also performed a full surface scan; no bad sectors. Many reviews report great reliability for HGST drives. This model has a specification of 1.5M hr MTBF and 600K load/unload cycles, if I remember correctly.

Some disadvantages:
1. This is a really NOISY drive. I mean REALLY noisy, especially when seeking. I believe it also has a background media scan (BMS) feature, so there is a seek noise every ~5 seconds. I don't care much about noise, but if you do, think again before purchasing this drive.
2. This drive gives off a lot of heat. It can easily reach 50C (122F) after running for just a while. In my case it does not exceed the designed temperature range (60C); the highest temperature I have logged so far is 56C (133F), after running the drive for ~10 hours in an aluminum HDD enclosure. I expect running this drive in a PC / NAS will have fewer heat issues.
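Note: The surface scan and temperature logging described above were done with Windows tooling (HDTune); on Linux, a comparable read-only check could be run roughly as follows (not the tools used above; the device name is a placeholder, and badblocks in this read-only form does not write to the disk):

# badblocks -sv /dev/sdX                   (read-only surface scan; reports any unreadable blocks)
# smartctl -A /dev/sdX | grep -i temperat  (current drive temperature from SMART attribute 194)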
D**R
HGST Deskstar NAS 4 TB 128 MB Cache: A Pretty Good Drive
Just purchased my second drive for my home-based file server. The drives are a little noisier, but performance is better than the WD Red NAS 2TBs that I'm moving to another unit I'm building for the office. Hitachi/HGST is a drive I've seen hundreds of times in my line of work. Of the piles of defective drives warrantied out and replaced for end users, I've only seen a couple of Hitachis; the rest were WDs and especially Seagates, just a lot of them.

In 2012, WD acquired HGST, though regulatory conditions kept the two businesses operating separately until October 2015. Since then, WD has been slowly integrating HGST manufacturing technologies into WD drives .. a good thing. Digging into drive performance comparisons between HGST enterprise and WD Gold enterprise drives, the write performance curves, latencies, power consumption (etc.) are almost identical between the two. Reliability tests (see Backblaze) indicate HGST has surprisingly low failure rates when compared to Seagate and WD.

A lot of research went into my purchase. I didn't buy for performance. For pure performance of "like" drives, Seagate gets the 'nod' .. its buffering architecture (writes directly to the platter) outperforms WD/HGST partitioned-platter (buffering) technology. But my ability to discern performance differences is nil when my focus is instead .. on pure reliability. So WD/HGST earns my top purchasing spot. I'm willing to forgive a little loss of speed or a little extra noise for some good old-fashioned reliability.

Lastly, for those who think WD Gold beats HGST .. remember, WD owns both manufacturing processes, and performance testing bears strong evidence that the HGST Deskstar 128 MB cache 7200 RPM enterprise NAS drives are hiding under the WD Gold label. So my suggestion (for a 4 TB drive) would be to spend the $130 on the HGST instead of the $175 Gold. I can't find a single business reason to misspend the extra $45. Happy New Year.
M**K
HGST is a champion of reliability, but not always...
I happened to have one that failed after three months; since then I've had five more, running for over three years now. No drop in performance, no failures. HGST is the king of stability; one can only hope this high level of quality is maintained with the transition to WD.
C**T
Very good drive
Very good drive, delivered quickly. I tested it with DriveDx on Mac to check the S.M.A.R.T. data: no issues, it is perfect, brand new and error-free. Do this test every time you receive a hard drive, and return it if the result looks questionable.

I spent several hours researching the most reliable hard drive brand, and it is HGST (formerly Hitachi). Look at Backblaze's annual reports, which list the most and least reliable manufacturers' drives. I store 4K video footage and need a lot of space and long-term reliability. I bought 4 Seagate 3TB drives 5 years ago; one is nearly dead and another is dying.

It is not the cheapest drive, but if the reliability of your data matters more to you than the price (which is, in any case, quite reasonable), it is a very good choice. Ideally, buy two for data redundancy. I will probably buy a second one soon.
J**O
Quality at a good price
Good quality and good speed. It is somewhat noisier than the conventional brands, though not to the point of being annoying. I bought it because several reviews said that after years of use these are the best rated, since they almost never fail. Time will tell...
R**I
Works reliably in a Synology NAS (DS218play)
I ordered these individually packaged drives about half a year ago directly from Amazon.de for my DS218play. The drives are relatively loud during access, that is true; you should definitely not use them if the NAS is in your bedroom. And if everything else in your office is otherwise quiet, a NAS fitted with these HGST drives shouldn't be placed there either. Ideally the NAS sits on a sound-insulated surface somewhere cool and dry.

So far the two drives have done their job reliably in an SHR array (with only 2 drives this is more or less RAID 1). Their operating temperature is between 37°C and 45°C, depending on ambient temperature. In my case the NAS powers up about twice a day, and the drives are spun up or down about 7-10 times a day.

Test results and SMART values are attached as screenshots:
- Reading without filesystem and network overhead: approx. 200MB/sec
- Sequential writing runs at 70MB/s up to 108MB/s.
That is perfectly fine in a NAS, since the network is often the real bottleneck. File indexing while simultaneously watching a video works without stuttering, even though the drive is then clearly audible doing its work.
A**N
Perfect for my RAID
The HGST Deskstar NAS 8TB strikes a middle ground between high performance and lower power consumption. It is fast (around 220MB/s), does not run excessively warm, is quite frugal with power, and is attractively priced. Since HGST drives also consistently come out far ahead in Backblaze's annual hard drive failure rate comparison, my decision was surely the right one for the long term as well.