Author Topic: Fastest Network storage?

2018-12-12, 13:28:09

jpjapers

We are looking at upgrading our work network to 10 Gigabit soon, and I've realised we are going to be limited by the read/write speeds of the server drives. Are there any issues with using SSDs as fast drives for things like your asset library and current projects, and using mechanical drives for backups?
Perhaps even M.2 drives on a PCIe card inside our file server, backing up to a normal HDD?

What would you guys recommend doing?

2018-12-12, 14:40:58
Reply #1

Juraj Talcik

I've used only SSDs for active storage for a few years now. I even use PCIe SSDs in the file server as well (enterprise Intel versions). Now, 10GbE still limits you to roughly 1200 MB/s, so you won't get full use out of PCIe drives; and since their benefit only shows in continuous transfers of large files, you won't benefit in any way whatsoever when it comes to reading/writing files from 3dsMax/Photoshop/etc. So you might stick with SATA drives and instead opt for higher capacity (or a higher-quality drive with a better cache).
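
If it helps, here's the back-of-the-envelope math behind that ~1200 MB/s figure (the overhead percentage is my rough assumption, not a measurement):

Code:
# 10GbE usable-throughput estimate; the 10% protocol overhead is a ballpark assumption.
LINE_RATE_GBPS = 10          # 10GbE line rate in gigabits per second
PROTOCOL_OVERHEAD = 0.10     # assumed Ethernet/IP/TCP framing cost

raw_mb_s = LINE_RATE_GBPS * 1000 / 8              # 1250 MB/s on the wire
usable_mb_s = raw_mb_s * (1 - PROTOCOL_OVERHEAD)  # ~1125 MB/s in practice

print(f"raw: {raw_mb_s:.0f} MB/s, usable: ~{usable_mb_s:.0f} MB/s")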

What issues would there honestly be :- )? SSDs are used everywhere, including mega-clustered servers at Amazon and the like. There have been no real issues with them for the past 6-7 years.

The only thing that comes to mind is that many consumer SSDs lack any form of power-loss protection. But you shouldn't count on that anyway; back up to at least two destinations (one of them detached).
talcikdemovicova.com  Website and blog
be.net/jurajtalcik    My Behance portfolio
lysfaere.com   Something new

2018-12-12, 15:23:29
Reply #2

jpjapers

Quote from: Juraj Talcik on 2018-12-12, 14:40:58

I'm confused by your first point. Are you saying there's no point in putting M.2 SSDs in, since they'd be limited by the network speed anyway?
Which SATA drives would you recommend? The 850 Evo seems to have a 750 MB/s read speed.

The issues I was speaking about were SSD longevity and lifespan, but looking at the specs, it seems most are rated much higher in terms of writes than I initially thought.

« Last Edit: 2018-12-12, 15:30:11 by jpjapers »

2018-12-12, 16:08:01
Reply #3

Juraj Talcik

Indeed, most of them are rated very conservatively, and even those conservative estimates are very hard to reach. I absolutely wouldn't worry about this at all today.

The 850 Evo is still the best price/performance available, in my opinion.

M.2 is just a form factor; what matters is the actual interface, PCIe or SATA (SATA is both an interface and a form factor). Some drives like the 850 Evo come in both M.2 and 2.5" SATA versions, but the speed is the same because the internal interface is always SATA.
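(Side note: that's also why the 750 MB/s read figure can't be right for any SATA drive; 750 MB/s is just the raw SATA III line rate divided by 8, before encoding overhead. A quick sketch of the real ceiling:)

Code:
# SATA III ceiling: 6 Gb/s line rate with 8b/10b encoding,
# so only 8 of every 10 bits carry data.
SATA3_GBPS = 6.0
ENCODING = 8 / 10

raw_mb_s = SATA3_GBPS * 1000 / 8  # 750 MB/s raw, the figure often (mis)quoted
max_mb_s = raw_mb_s * ENCODING    # 600 MB/s theoretical data ceiling
print(f"SATA III data ceiling: {max_mb_s:.0f} MB/s")
# real drives top out around 540-550 MB/s sequential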
And yup, you got me right: PCIe drives are much faster than even a 10GbE network can use, roughly 1200 MB/s vs 2700+ MB/s. But those top speeds are very workload-specific and niche in nature. Copying huge amounts of video/photography data? Benefits. Loading tons of small textures and a bunch of 1-2 GB scenes? No benefit at all.
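
A toy model of that difference (every number here is an illustrative assumption, not a benchmark):

Code:
# Transfer time = sequential throughput term + fixed per-file cost
# (seek/metadata/network round-trip). Numbers are made up for illustration.
def transfer_time_s(total_mb, file_count, seq_mb_s, per_file_cost_s=0.005):
    return total_mb / seq_mb_s + file_count * per_file_cost_s

# One 50 GB video dump: throughput dominates, the faster interface wins big.
print(transfer_time_s(50_000, 1, 1100))   # ~45 s at 10GbE speeds
print(transfer_time_s(50_000, 1, 2700))   # ~19 s at local PCIe speeds

# 5,000 small textures totalling 5 GB: per-file overhead dominates,
# so the faster interface barely matters.
print(transfer_time_s(5_000, 5_000, 1100))  # ~29.5 s
print(transfer_time_s(5_000, 5_000, 2700))  # ~26.9 s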

So it's definitely a good idea to stay with SATA drives for the file server in the meantime. Unless you can score a good deal :- ).

PCIe drives are a good idea for system storage, but even there the performance benefit is often questionable, since what truly matters is random access/write speed on small data, and that's where only Intel's Optane drives currently dominate, at exorbitant prices (let's wait 2-3 more years before jumping on this).

2018-12-12, 16:10:02
Reply #4

jpjapers

Quote from: Juraj Talcik on 2018-12-12, 16:08:01

SATA it is. I'll aggregate all 4 network ports on the server too, to increase the bandwidth :)
I'll probably have a separate NAS for backups too.
Thanks for your help!

2018-12-12, 16:14:31
Reply #5

Juraj Talcik

Heh yes, you can even aggregate 10GbE links for the ultimate super-speed :- ). I actually wanted to do that as well, but I saved money by only buying an 8-port switch, so I have no spare ports for such luxury.

Which I btw regret: my 8-port Netgear randomly drops ports from 10GbE to 1GbE if all ports are being fully used at 10GbE!! It overheats very easily and struggles under load.
So if you want to link-aggregate, buy a much bigger switch than you think you need. It seems like a waste of money, but that's what "pro-sumer" switches are like.
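
One more caveat on aggregation: it adds lanes, it doesn't make a single transfer faster. LACP (802.3ad) hashes each flow onto exactly one member link, so one file copy from one machine still tops out at one link's speed; only many clients in parallel fill the full aggregate. A rough sketch (real switches hash on various header fields; this one is purely illustrative):

Code:
import hashlib

LINKS = 4  # e.g. four aggregated 1GbE ports: 4 Gb/s aggregate, 1 Gb/s per flow

def link_for_flow(src_mac: str, dst_mac: str) -> int:
    # Pick a member link from a hash of the flow's addresses; every packet
    # of the same flow lands on the same link.
    digest = hashlib.md5(f"{src_mac}-{dst_mac}".encode()).digest()
    return digest[0] % LINKS

# A single client->server copy stays on one link (~1 Gb/s, ~110 MB/s);
# several render nodes pulling at once can spread across all four.
print(link_for_flow("aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:ff"))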

2018-12-12, 16:21:53
Reply #6

jpjapers

Quote from: Juraj Talcik on 2018-12-12, 16:14:31

I'm now looking at putting a dual SFP+ card in our file server and aggregating that instead, haha. We already have a switch with SFP+, so it's fairly high-end.

2018-12-12, 16:49:24
Reply #7

Fluss

Quote from: Juraj Talcik on 2018-12-12, 16:14:31

I have the same issue with mine, and it pisses me off! Unplugging/replugging the affected Ethernet port does the trick, but wtf... Stay away from those ones.

2018-12-12, 16:54:28
Reply #8

Fluss

Quote from: jpjapers on 2018-12-12, 16:21:53

SFP+ is expensive as hell. I'd rather go with standard cheap Cat 7 or even Cat 6a Base-T cabling, unless you plan runs of 100+ meters.

2018-12-12, 17:12:06
Reply #9

jpjapers

Quote from: Fluss on 2018-12-12, 16:54:28

We already have SFP+ from our file server to our switch, but a dual card would double that bandwidth, which we'd need if we're aggregating all 4 gigabit ports on each render node.
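
Quick sanity check on the numbers (the node count here is just an example, not our actual farm):

Code:
RENDER_NODES = 4       # hypothetical farm size
NODE_GBPS = 4          # 4x 1GbE aggregated per render node
SERVER_GBPS = 2 * 10   # dual SFP+ 10GbE on the file server

demand_gbps = RENDER_NODES * NODE_GBPS
print(f"peak node demand: {demand_gbps} Gb/s vs server uplink: {SERVER_GBPS} Gb/s")
# 16 Gb/s demand vs 20 Gb/s uplink, so the dual card leaves headroom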