Samsung has increased its market share from 2014 to 2015:
"SSD retail prices have dropped from $1 per gigabyte (GB) to less than $0.40 per GB during the past three years, and DRAMeXchange expects them to reach $0.25 per GB later this year. As a result, 256 GB SSDs that cost $125 last year are now selling for about $85, nearly a 30% reduction."
Here's the article:
http://electronics360.globalspec.com/article/6831/samsung-is-extending-its-lead-in-ssds
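A quick back-of-the-envelope check of the quoted figures in Python (only the numbers from the quote above; nothing here is measured):
# Sanity-checking the quoted 256 GB price drop
capacity_gb = 256
last_year, now = 125, 85          # USD, per the quote
print(f"Last year: ${last_year / capacity_gb:.2f}/GB")      # ~$0.49/GB
print(f"Now:       ${now / capacity_gb:.2f}/GB")            # ~$0.33/GB
print(f"Reduction: {(1 - now / last_year) * 100:.0f}%")     # ~32%, i.e. "nearly 30%"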
-
Starlight5
@3Fees Excellent! 256GB MLC SSDs are now selling for $60 and less.
-
I just want to know: where are the darn consumer-priced 4TB 2.5" and 2TB M.2 SSDs? I need more space! The market seems to have been fairly stagnant for the last year or so. Heck, I'd even go with a hard drive at this point, but they never came out with anything larger than the 2TB 9.5mm drives from years ago.
-
Starlight5
-
Agreed! Concerning M.2 drives, it's all the thin-and-light / ultrabook crowd's fault, since most if not all manufacturers restrict themselves to single-sided M.2 configurations, where double-sided (= double the storage capacity) would be "too thick" for super-thin notebooks.
I'm also waiting for higher-capacity 2.5" HDDs; my external 2.5" 4TB HDD is on its last 240 GB and closing in fast! -
Spartan@HIDevolution Company Representative
Nothing interests me anymore except larger storage. Benchmarks aside, all SSDs are fast. It's been almost a year since any new SSD has excited me.
-
I'm guessing the next interesting thing will be Intel XPoint-based Optane drives. They won't change sequential speeds, since we're still going to be "limited" by PCIe 3.0 x4, but 4K performance at low and high queue depths should go through the roof, same with mixed read/write workload performance.
Keeping my 850 Pro at least until then...
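For context on that PCIe 3.0 x4 "limit", a rough ceiling calculation; this assumes the standard 8 GT/s per lane and 128b/130b encoding, and real drives land below it once protocol overhead is counted:
# Theoretical PCIe 3.0 x4 bandwidth ceiling (before NVMe/protocol overhead)
lanes = 4
transfer_rate = 8e9            # 8 GT/s per PCIe 3.0 lane
encoding = 128 / 130           # 128b/130b line encoding
bytes_per_sec = lanes * transfer_rate * encoding / 8
print(f"~{bytes_per_sec / 1e9:.2f} GB/s each way")     # ~3.94 GB/s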
-
tilleroftheearth
When Optane gets here, it will be like moving from 1990s network 'hubs' to 10GbE switches.
A full-duplex storage subsystem (for the average consumer), non-volatile memory, and capacity and reliability that will finally challenge HDDs in a meaningful way, with prices that will eventually shove those spinning piles of rust over the cliff, to be forgotten by anyone born in the last few years.
Today's half-duplex consumer storage platforms (either read or write, but not both simultaneously without slowing down...), with their relatively small capacities, will not be missed.
This is not the evolution of SSDs. It will be their disruptive demise.
Now, come on Intel! Bring us Kaby Lake and Optane now!!! -
Tinderbox (UK)
-
Indeed, these companies need to get busy and offer large SSDs, M.2 drives and the like at consumer prices, rather than constantly trying to milk the cow with small changes.
Now for the back breaker: disk drive companies are 60 to 70 years behind in technology. That's right, ladies and gentlemen. The United States Government has been using RAM drives since the 1950s Census Bureau. These RAM sticks are 4 to 6 feet long and plug in just like ordinary RAM on a motherboard; they take two people to insert into slots on a huge computer the size of a room wall, it holds the operating system, and it updates the census every ten years.
RAM drives would speed up all laptops, etc., to breakneck speed.
Why have these so-called disk drive companies not gone in this direction and made RAM much larger by lithographic methods and called it a RAM drive? Who knows. What I do know is that the high wages these people get is unjust enrichment, and this includes the assembler companies as well, for not sticking up for us customers and getting the best for all of us.
1, 2, 4, 5, 10 terabyte RAM sticks could be made; they aren't, because they keep milking the cow with small changes to get our money. 180 pins, 240 pins, etc. is much more bandwidth than a few pins on steroids.
Happy Computing -
The 4TB 850 EVO is already shipping... that's about as close as you are going to get...
-
Occam's Razor suggests something a bit simpler than a tin-foil idea. Rather, it's probably because spinning rust is the cheapest way to store data in terms of $/GB, next to magnetic tape (which is still the primary backup medium for most major businesses, so I guess they're even more out of date?).
While RAM disks are stupid fast, you have to keep in mind that they still need some sort of alternate storage method, unless you either don't care about long-term storage of the RAM disk data or don't mind losing it when the power cuts out. That's especially important for consumers like the Average Joe, who usually aren't running off a UPS or similar. -
Spartan@HIDevolution Company Representative
I have 64GB of RAM and only use about 5GB of it.
Can you teach me how to make the most of it? When I tried PrimoCache it felt like just a gimmick, and my system didn't feel any snappier. -
Hell, if you want to try a RAM disk, I won't stop you. Personally I think it's a very neat technology that works well within its domain, but SSDs are fast enough for Average Joes and for me. Maybe an idea would be to make an X GB RAM disk, load some sample big data into it, and play with data mining tools like Hadoop?
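If anyone does want to try that, here's a minimal sketch of the experiment (Python; it assumes a RAM disk is already mounted, and /mnt/ramdisk and /home/user/ssd are just placeholder paths):
import os, time

def write_speed(path, size_mb=1024, chunk_mb=16):
    """Write size_mb of data to path and return throughput in MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())    # make sure the data actually hits the device
    return size_mb / (time.time() - start)

# Placeholder paths -- point these at your own RAM disk and SSD mounts
print("RAM disk:", round(write_speed("/mnt/ramdisk/test.bin")), "MB/s")
print("SSD:     ", round(write_speed("/home/user/ssd/test.bin")), "MB/s")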
Personally, I have 16GB in my desktop, and while my daily use is low, I tend to run a few development VMs for learning OSes, programming environments, testing server configurations, etc., and sometimes I need to compress large data sets (over 150GB) using a large dictionary size (which eats RAM during the compression stage). -
I was thinking of trying a RAM disk for loading games into, to make load times go from a few seconds to less than a second.
There are free ones out there, but there is also one on Steam with a few bells and whistles. I've seen mixed reviews of it, but I think it does work; it's just not super user-friendly.
-
Not sure how that'd work if you plan on turning your computer off at any point. Software installation typically assumes non-volatile memory.
-
Not sure how it works, but I would think it just points to the RAM disk "folders" rather than the stuff on the drive. Also, you could just install the game to the RAM disk, then save and load that huge image file once per boot. I know I've done that before, but it was many years ago when games were much smaller. It's nothing new, and yeah, it's not perfect or revolutionary, otherwise everyone would be doing it.
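Roughly what I mean, as a hypothetical sketch (Python on Windows, assuming a RAM disk mounted as R:\ and a made-up game folder; the paths are placeholders, and copying changed files back before shutdown is left out):
import os, shutil, subprocess

GAME_DIR = r"C:\Games\SomeGame"        # hypothetical install path
ON_DISK  = GAME_DIR + ".on_disk"       # permanent copy kept on the SSD/HDD
RAM_COPY = r"R:\SomeGame"              # R:\ assumed to be the RAM disk

# Every boot: refill the (volatile) RAM disk from whichever copy is real
source = ON_DISK if os.path.isdir(ON_DISK) else GAME_DIR
if not os.path.isdir(RAM_COPY):
    shutil.copytree(source, RAM_COPY)

# First run only: move the real install aside and leave a junction in its place,
# so the game still thinks it lives at its original path
if not os.path.isdir(ON_DISK):
    os.rename(GAME_DIR, ON_DISK)
    subprocess.run(f'mklink /J "{GAME_DIR}" "{RAM_COPY}"', shell=True)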
-
Could definitely shuffle around images, though IMO an easier method for stupid-fast software loading/saving would be to have a RAID 0 SSD setup. Best of both worlds there (non-volatile yet high seq/random I/O); only downside would be the expense.
Symbolic links only point to another location (in this case, stuff in the RAM disk partition). Doesn't solve the volatility issue though; you'd have to fall back to the image shuffling idea for this to work like a normal drive. -
I've been running SSD RAID for a long time; I want faster, lol! Actually, I like to mess around, and I'd also like to use some of the extra RAM I don't really need anyway. I might get that Steam software on sale, since it's cheap, just to try it out. Even if it made a measurable difference in only a few games or programs it would be worth it to me. For example, the loading screens in 3DMark seem to take too long; if I could make those super fast I'd be delighted.
-
Depending on the application and processor architecture, RAM disks aren't actually faster than flash.
I spent 6 years at Fusion-io selling the fastest PCIe flash storage on the market, and a really cool demo was to show people a 4-socket system with 1TB of memory running a database workload out of a RAM disk vs. a 2-processor box with 1TB of PCIe flash running 2x faster.
The problem is that with a RAM disk, the CPU has to handle all of the paging in and out of the RAM disk, back into regular RAM, and on large multiprocessor systems the NUMA overhead of swapping data between different banks of memory over the QPI links caused so much overhead that performance suffered.
A single flash drive connected to a single CPU was much, much faster in that case.
Also, on the 4-socket boxes, when you fully populated the memory, it dropped from 1600MHz down to 800MHz, so you suffered a lot of bandwidth loss.
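To put a rough number on that last point (assuming 4 memory channels per socket, which is typical for that class of box; exact channel counts vary by platform):
# Peak DRAM bandwidth per socket: channels * 8 bytes per transfer * transfer rate
channels = 4                    # assumed 4 memory channels per socket
bus_width_bytes = 8             # 64-bit DDR bus
for rate in (1600e6, 800e6):    # transfers per second
    gb_s = channels * bus_width_bytes * rate / 1e9
    print(f"DDR3-{rate / 1e6:.0f}: ~{gb_s:.1f} GB/s per socket")
# 1600 MT/s: ~51.2 GB/s vs 800 MT/s: ~25.6 GB/s, before any NUMA/QPI penalty
-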
I use SoftPerfect RAM Disk. It's donationware, simple to use, and works like a charm. I use it to cache video streams for game capture (for things like ShadowPlay), giving it 8GB of the 32GB in my desktop. It just prevents additional wear and tear on the SSD, and only writes to disk if I do a shadow capture.
-
You aren't going to wear your SSD out with normal desktop workload stuff.
-
Depends on what "normal" means to a particular user. Yeah, Average Joes and even most "power users" (whatever that's supposed to mean) won't wear an SLC or MLC drive out so quickly, though if your workload involves stupid levels of I/O then I can see it, especially for TLC drives.
-
For stream caching, it's a constant write to the SSD, depending on how many minutes you set the cache to, and if you want a shadow capture, it dumps the contents of the cache as a video to wherever you specify. So caching 3440x1440 @ 60FPS is a lot of data if you're playing for a couple of hours at a time.
-
3440 * 1440 * (3 bytes per pixel) => 14,860,800 bytes per frame; * 60FPS => 891,648,000 bytes per second, which is roughly 0.9 GB/s (assuming no image compression before hitting the SSD).
Taking the Samsung 840 EVO 120GB as an example TLC drive, Anandtech reports that it can deal with about 4340 GiB total writes before dying (4340 GiB * 1.07374 = 4660 GB).
http://www.anandtech.com/show/7173/...w-120gb-250gb-500gb-750gb-1tb-models-tested/3
4660 GB / 0.9 GB/s ≈ 5,200 seconds, or under an hour and a half of continuous capture, which is pretty pitiful endurance.
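The same back-of-the-envelope math as a snippet, using the uncompressed worst case and Anandtech's ~4340 GiB figure for that particular 120GB drive:
width, height, bytes_per_pixel, fps = 3440, 1440, 3, 60
write_rate = width * height * bytes_per_pixel * fps       # bytes/s, uncompressed
endurance_bytes = 4340 * 1.07374 * 1e9                    # ~4660 GB, in bytes
print(f"Write rate: {write_rate / 1e9:.2f} GB/s")                              # ~0.89 GB/s
print(f"Drive worn out after ~{endurance_bytes / write_rate / 3600:.1f} h")    # ~1.5 hours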
-
It's just that it isn't a big deal to set it to a RAM disk when I have the RAM available. Kind of like I can drive my car to the park two miles away or ride my bike. I'd rather not put the wear and tear on my car even if it is minor (well that and I get a little exercise).
-
I mean, looking at it, even if he's theoretically writing at 1GB/sec, the SATA interface will limit him to ~400-500MB/sec, but that drive only writes around 130MB/sec, and once he writes ~130GB and fills the physical flash on the drive, it'll be forced into steady-state writes and performance will be limited by groomer performance, which will be ****.
It might not suit your workload, but at 120GB, you wouldn't be able to wear it out too easily... you'd get bored trying, that's for sure.
Ideally you wouldn't be using a TLC drive for steady-state write workloads anyways...
In related news, I happen to have 3 drives in my system that you can write to at 1GB/sec (each) just fine...wear life is rated to 17 Petabytes written (each) =)
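For scale, just dividing the rated figure by that write speed (sustained real-world rates would be lower):
rated_pbw = 17e15         # 17 PB rated write endurance, in bytes
write_speed = 1e9         # 1 GB/s of continuous writes
print(f"~{rated_pbw / write_speed / 86400:.0f} days of non-stop writing to hit the rating")  # ~197 days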
I won't be wearing those out ever. -
Yeah, I glossed over those details, though even a high-performance M.2 TLC SSD will only last you so long with a 1GB/s workload.
-
I fear not wear-life in my desktop box...
fct1 Attached
1205.00 GBytes device size
Format: v500, 2353515625 sectors of 512 bytes
PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
Active media: 100.00%
Rated PBW: 17.00 PB, 98.92% remaining
Lifetime data volumes:
Physical bytes written: 183,169,313,018,784
Physical bytes read : 105,047,396,917,632
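Quick check that the 98.92% figure lines up with the byte counter above:
physical_bytes_written = 183_169_313_018_784
rated_pbw_bytes = 17e15
used = physical_bytes_written / rated_pbw_bytes * 100
print(f"{used:.2f}% used, {100 - used:.2f}% remaining")    # ~1.08% used, ~98.92% remaining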
The crazy thing is, I used to sell these things at $25k for 1.2TB and now you can get them used on ebay for $500 with a TON of wear-life left =P
Good thing I'm selling 6.4TB cards and 512TB appliances now instead =) -
Yes, 4TB would be terrible. But the article uses that value (the writes accumulated on a still-functioning drive) and the current wear level (3%, so somewhere between 2.5% and 3.4%) to guesstimate the final expected total at ~152TB ((4660 GB / 3) * 100). Real-life examples mention 242TB, 432TB and 800TB (and 2.4PB for the Pro). Granted, the 840s have their flaws, but they're not that bad... mine ought to have died several times over by now if they had a 4TB life expectancy.
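That extrapolation as a snippet (using the 3% wear figure quoted above; actual per-drive results obviously vary):
writes_so_far_gb = 4660     # total writes when the wear indicator was read
wear_used_pct = 3           # reported media wear at that point
estimated_tb = writes_so_far_gb / wear_used_pct * 100 / 1000
print(f"Estimated endurance: ~{estimated_tb:.0f} TB")      # ~155 TB at exactly 3% wear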
-
With compression, the reality is that it's about 250-300MB per MINUTE using high-quality settings, which is still only about 15-18GB per hour. Either way, it's not a big deal to offload the caching to RAM, especially when you have 32GB and a hexacore CPU.
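Which puts the compressed stream roughly two orders of magnitude below the raw pixel rate worked out earlier:
compressed_mb_per_s = 300 / 60                        # upper end of the quoted 250-300 MB/min
raw_mb_per_s = 3440 * 1440 * 3 * 60 / 1e6             # uncompressed, from the earlier estimate
print(f"Compressed:   ~{compressed_mb_per_s:.0f} MB/s")                                 # ~5 MB/s
print(f"Uncompressed: ~{raw_mb_per_s:.0f} MB/s")                                        # ~892 MB/s
print(f"The compressed stream is ~{raw_mb_per_s / compressed_mb_per_s:.0f}x smaller")   # ~178x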
-
I should set up a RAM disk on my ESX box and do some encoding tests out of DRAM vs. out of one of my ioDrives...