|
I just bought a laptop a couple of weeks ago, but it doesn't have an SSD; I am planning to install one myself. Thanks for sharing this here - now I will surely install one in that machine.
|
|
|
|
|
No one has talked about reliability. While I am OK with an SSD for the operating system, mechanical discs are still much more reliable than SSDs. And we have not talked about the size/price ratio. So I think RAID with mechanical drives is still the way to go.
|
|
|
|
|
I have been working with computers long enough to have seen a long series of placebos - some so strong that they work even for people who don't believe in them.
An old one is from the 386 days, when you had to add an '87 chip (and have a motherboard prepared for it) to get floating point hardware: I knew lots of people who claimed that the speed of compilation increased significantly after plugging in an '87. A compiler most certainly does not make use of floating point instructions!
A more recent one is when people "speed up" their PC by adding another 8 GB to double the RAM size to 16 GB: When I hear this (from a home user - professional use is different), I ask to see the resource use in Resource Monitor, often to find that less than a fourth of it is actually in use. (And you know that "in use" is certainly not the same as "active working set" - a page that was last addressed ten minutes ago, and would take five milliseconds to fetch anew even from a magnetic disc, is still counted as "in use".)
Well, an SSD is certainly not a pure placebo: It significantly speeds up the startup of programs, especially those that initially load a lot of resources from a multitude of files (less so for programs that load resources "lazily", on demand). The first time you open, say, one of the MS Office applications after installing an SSD, you are within your rights to exclaim: Wow!
That's where the placebo comes in: Because it starts up so fast, you have a distinct feeling that all subsequent functions are much faster as well. 99% of that is purely psychological. What is needed in memory is in memory. Maybe a couple of disk pages are read now, a few then. Even where there is a physical access, maybe 5 ms is shortened to less than 1 ms - but remember that all modern magnetic disks have amply sized RAM caches nowadays, so even a megabyte write doesn't have to wait for the rotation or the disk arm. The cache is used for prefetching reads as well, and does so quite successfully with NTFS as long as your disk is reasonably defragmented: A large fraction of the reads done after program startup are sequential reads of the next block in a file - and that block is found in the disk cache.
Many high-data-volume applications are also real-time in nature: Even though your video player spins through a few megabytes a second when playing a high-def movie, the movie won't play faster from an SSD - and any magnetic disk of this millennium has plenty of speed to keep up. Also, those making applications that handle huge data volumes (such as non-linear video editors) have their roots in an age when double buffering was required, and previews at a resolution limited to the actual window size were mandatory - even on a 386 with disks from the 1990s, non-linear video editing went fairly smoothly. Those techniques are still in place, even though the "need" for them disappeared at least ten years ago, long before SSDs.
First: SSDs give a true speedup for first-time reads of huge amounts of disk data, such as starting up an application composed of several multi-megabyte DLLs. Second: Some software buffers a huge amount of file writing until you have finished your work and exit to do something else - so you don't have to wait for the writes to complete.
Third: If you regularly copy multi-gigabyte files between disks - but then both source and destination disks must be SSDs and internal; otherwise the interface will be the limiting factor (most certainly so with USB 2.x, but it goes for USB 3 as well). How often do you move multi-gigabyte files between your internal disks? (Within the same physical disk, it is just a directory update that goes in a snap.) How often do you copy them to an external USB 3 disk?
Yet, these are cases where an SSD can show a significant speedup. But those special cases (the fast program startup in particular) make people "have a feeling of" everything going a lot faster, when in fact it is totally unaffected by the switch to SSD.
|
|
|
|
|
Anything connected with file I/O will be faster. The act of loading an app, or a data file for an app, will benefit when it's on an SSD. How the app performs after that will not benefit from an SSD UNLESS the system doesn't have a lot of RAM. At that point, disk paging will benefit (as long as the paging destination disk is also an SSD).
For raw speed, you want a PCIe NVMe drive. It's all about the width of the bus (and a PCIe NVMe drive will always be faster than a SATA SSD).
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
|
|
|
|
John Simmons / outlaw programmer wrote: Anything connected with file i/o will be faster. Not unconditionally. If your rotating disk has a 64 MB RAM buffer (which is not uncommon today), you can write several MB to the disk without waiting for the physical disk access to complete. For piecewise, but sequential, read of a file, either the disk itself or the OS may read a much larger chunk of data - typically: a physical disk track, or an entire NTFS extent (up to a certain maximum) - into a cache, so that your next 'n' I/O-operations do not access the disk at all.
From the interactive user's perspective: More advanced applications do disk I/O in a background thread. Especially for writes, the user need not know when the operation is complete, need not wait, but can continue with the next operation. CPU-bound systems using double buffering need not be delayed by I/O at all: If processing a disk page requires 10 ms, and the next page is fetched in parallel, it makes no difference whether the fetch completes in 5 ms or 500 µs, when there is still either 5 or 9.5 ms of processing left for the previous page. The total time is limited by the CPU (or GPU, or communication line speed, or ...), not by disk I/O.
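The double-buffering point can be sketched in a few lines (Python used for illustration; the function names are made up). A reader thread fetches page i+1 while the main thread is still processing page i; as long as processing takes longer than the fetch, the fetch latency is invisible:

```python
import threading
import queue

def double_buffered(read_page, process_page, n_pages):
    """Overlap I/O with processing: a reader thread fetches the next
    page while the main thread processes the current one. If each
    page takes 10 ms of CPU and the fetch finishes in 5 ms (magnetic)
    or 0.5 ms (SSD), the total time is the same: it is CPU bound."""
    buf = queue.Queue(maxsize=1)  # at most one prefetched page in flight

    def reader():
        for i in range(n_pages):
            buf.put(read_page(i))   # blocks while the buffer is full
        buf.put(None)               # sentinel: end of input

    threading.Thread(target=reader, daemon=True).start()

    results = []
    while (page := buf.get()) is not None:
        results.append(process_page(page))
    return results
```

With `maxsize=1` the reader stays exactly one page ahead of the consumer, which is the classic double buffer.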
It would be more correct to say that any operation that is disk I/O bound, and leads to a physical access to the disk (that is, one depending on arm position and rotation - not access to the disk's RAM buffer), will be sped up. But those situations are really few and far between, except at startup of a program that insists on loading several huge DLLs and accessing them all over before giving control to the user.
Even for program startup: Remember that an .exe or .dll is accessed as a memory mapped file: The page table entries are set up to point to the file pages, but the pages are not read into (main) RAM until actually accessed. If the .exe or .dlls are so huge that setting up the page tables takes a whole lot of time, there is very little difference between SSD and magnetic disks, especially on a reasonably defragmented NTFS file system.
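That demand-paging behaviour can be observed from user code as well; here is a small sketch using Python's mmap module (the scratch file name is made up). Mapping the file transfers no data; each page is read from disk, or the file cache, only when a byte inside it is first touched:

```python
import mmap
import os
import tempfile

# Create a 16 KB scratch file, then map it read-only. The mmap call
# itself reads nothing; pages are faulted in on first access.
path = os.path.join(tempfile.gettempdir(), "mmap_demo.bin")
with open(path, "wb") as f:
    f.write(bytes(range(256)) * 64)  # 16 KB of a repeating 0..255 pattern

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first = mm[0]         # faults in the first page only
    later = mm[2 * 4096]  # faults in a different page, two pages on
    mm.close()
os.remove(path)
```

Windows maps .exe and .dll files the same way, which is why a huge binary does not have to be read in full before the program starts.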
I have been working on systems where the code designers were very careful to gather everything required for startup and initialization into as few disk pages as possible, to minimize paging before the user got control. A fairly cheap way to increase user satisfaction.
|
|
|
|
|
This all works wonderfully as long as you don't get into the UEFI vs. legacy boot wars. I'm not even sure one can blame this on Microsoft, but following its tradition, Microsoft's support web site is full of useless gibberish from support people. I really think it's an early Microsoft AI engine posting solutions...
In any event, if you clone your spinner to an SSD, the SSD might not boot, instead displaying an assortment of exception conditions.
Charlie Gilley
Stuck in a dysfunctional matrix from which I must escape...
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
|
|
|
|
|
Haven’t had that problem, yet.
|
|
|
|
|
Completely unexpected on my part; I had to go off on a research project - wtf is UEFI (or whatever)? All sorts of contradictory answers, and of course the Microsoft folks chime in with "you'll just need to re-install Windows". Uh, not happening...
Charlie Gilley
Stuck in a dysfunctional matrix from which I must escape...
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
|
|
|
|
|
This might help. UEFI can boot from either MBR or GPT partitions but BIOS can only do MBR (I think).
UEFI and partition types
|
|
|
|
|
When I swapped to an M.2 SSD, I installed W10 over the course of a commercial break.
3 minutes from cold boot to interactive use on a new install; SATA-based drives can't even come close to that.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
|
|
|
|
Linux is even faster. I have a full Ubuntu install on a laptop with an NVMe drive, and from power-on to login prompt it boots in about 30 seconds (even faster if the USB drive is plugged in, because otherwise grub insists on looking for it for at least 10 seconds before it times out and continues the boot process).
On another machine, I'm running a minimal Lubuntu (also on an NVMe drive) - less than 10 seconds on that one.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
|
|
|
|
You misunderstand. I installed windows in that time frame.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
|
|
|
|
I agree. I have an M.2 4x PCIe SSD, and it benchmarks three times faster than a SATA SSD on the same computer.
|
|
|
|
|
Scary how dang fast the PCIe drives are...
Charlie Gilley
Stuck in a dysfunctional matrix from which I must escape...
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
|
|
|
|
|
My backup strategy is pretty simple: invoke Robocopy from a small PowerShell script that works out the logic of identifying the backup drive, as its letter might not always be the same depending on what else I happen to have hooked up. I like Robocopy because it's as simple as a recursive "copy star-dot-star", and it automatically skips anything that hasn't changed. Also, having a straight copy of the file system means I have direct access to any file, without relying on some BLOB that can only be accessed by some proprietary backup software that has to be installed on whatever system I hook the backup drive up to.
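That kind of wrapper is only a few lines; here is a hedged sketch (Python for illustration - the marker-file name, paths, and drive-scan approach are assumptions, not the poster's actual script). It finds the backup drive by a marker file rather than a hard-coded letter, then hands the actual copying to Robocopy:

```python
import os
import string
import subprocess

def find_backup_drive(marker="backup.id"):
    """Scan drive letters for the one carrying the marker file,
    since the backup drive's letter can change depending on what
    else is plugged in. Returns the drive root, or None."""
    for letter in string.ascii_uppercase:
        root = letter + ":\\"
        if os.path.exists(os.path.join(root, marker)):
            return root
    return None

def build_robocopy_cmd(src, dst, retries=0):
    """/MIR mirrors the tree (copies changes, removes deletions);
    /R:0 fails fast instead of the default one million retries."""
    return ["robocopy", src, dst, "/MIR", "/R:" + str(retries)]

def run_backup(src="C:\\Data"):
    drive = find_backup_drive()
    if drive is None:
        raise SystemExit("backup drive not found")
    subprocess.run(build_robocopy_cmd(src, os.path.join(drive, "Data")))
```

The /MIR and /R:0 switches match the invocation mentioned later in the thread; everything else here is illustrative.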
In any case…I have 2 backup drives (normally, one offline, sitting next to the computer, and the other at the office for monthly rotation). Both are in external USB enclosures. Both have started exhibiting the same problem where the machine would stop reading (writing?) at the same point on the same file (some large-ish file, but otherwise totally random in the sense that there’s nothing that sets it apart from plenty of others). CPU activity settles down to nothing, same with disk reads/writes, then I can’t kill Robocopy with Ctrl-C or kill the PowerShell session with Task Manager. Logging out/rebooting can’t “cleanly” terminate the process either, as it just sits there forever (I've let it run overnight) and I have to hard-reset the system. Or, yank out the USB cable--then the system comes back to life immediately. Not good—I fully expect this sort of thing to eventually result in corrupt files/file systems if I have to keep doing this.
After rebooting (or re-inserting the USB cable), if I restart the same copy operation, it may manage to proceed further, or get stuck at the same point on the same file again. With enough retries, I suppose I could complete the full backup eventually, but obviously that’s not the way to go.
Copying the offending file with Explorer to the backup drive again results in the same thing – it eventually just stops (doesn’t complain about any read/write error, just no further progress is made).
Everything at this point is suggesting a bad source file. But if I move the file elsewhere or delete it altogether, then the same thing happens again with the next large-ish file it might encounter. Or it might not.
I ran chkdsk /f on the source and both target drives. No problem whatsoever. I don't know how thorough chkdsk is, but it claims everything is squeaky clean.
The crazy solution? Hook up the backup drive to another system, and run the backup through a share across the LAN, instead of directly from local (internal) drive to local (USB) drive. Then all files can be read and the full backup completes without even a hint of any sort of slowdown at any point.
Since I'm copying across the LAN with the same USB cable, the only thing I have left to blame is the source computer's USB port. There's only one at the front, and the ones at the back are unfortunately rather difficult to reach (it's a lousy physical setup), but I'll have to try that just to confirm.
In the meantime... if I get the same thing through another port, what else is left that I should be looking for?
(and yes, the anti-virus remains completely quiet)
|
|
|
|
|
|
Any backup software that creates a large file that can only be opened by said software is a non-starter in my book.
|
|
|
|
|
"There can only be one!"
I'm afraid that's the way most backup programs work ...
|
|
|
|
|
...and that's why I don't use any of 'em.
|
|
|
|
|
1. Perhaps there is an error. Robocopy defaults:
/R:n :: number of Retries on failed copies: default 1 million.
/W:n :: Wait time between retries: default is 30 seconds.
Try /R:0 (written as "zero" at first, because ":0" turned into a happy face) - maybe you tried this already?
2. Is a file or directory size greater than 4 GB?
If you can keep your head while those about you are losing theirs, perhaps you don't understand the situation.
|
|
|
|
|
My script invokes Robocopy with
/MIR /R:0
My backup set includes files that are tens of GBs in size, and I haven't seen any problem with those. I suppose I shouldn't have said that it only happens with large files, but rather, that I haven't seen it happen with small files.
|
|
|
|
|
The solution kinda makes me wonder if it's a memory fault, rather than a disk/USB problem.
What happens if you try copying a large file repeatedly onto the same disk? A quick PowerShell script - logging each completed copy so you can see any slowdown - left running overnight?
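The suggested overnight test could be as small as this (a sketch in Python rather than PowerShell; paths and names are placeholders). It logs a line per pass, so a hang or a growing copy time is visible the next morning:

```python
import os
import shutil
import time

def copy_stress(src, dst_dir, passes=1000):
    """Copy the same file repeatedly, alternating between two target
    names, recording how long each pass took. A failing drive, port,
    or memory stick shows up as a rising time or a pass that never
    returns."""
    durations = []
    for i in range(passes):
        dst = os.path.join(dst_dir, "stress_%d.bin" % (i % 2))
        t0 = time.monotonic()
        shutil.copyfile(src, dst)
        dt = time.monotonic() - t0
        durations.append(dt)
        print("pass %d: %.2f s" % (i, dt))
    return durations
```

Pointing `dst_dir` at the suspect USB drive and `src` at one of the files that previously got stuck would reproduce the exact failing path.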
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
OriginalGriff wrote: The solution kinda makes me wonder if it's a memory fault, rather than a disk / usb problem.
Even if--for a given file--it either doesn't happen at all, or always happens at the same place, even after many reboots? :-/
OriginalGriff wrote: What happens if you try copying a large file repeatedly onto the same disk? A quick Powershell - logging each time it completed each copy so you can see any slowdown - and leave it running overnight?
I'm tempted to run sdelete from Sysinternals and have it fill all free space to see what happens. But that's a good idea as well. I'll follow up if I go ahead with it.
|
|
|
|
|
Have you looked in the Event Log for any hardware errors? It does sound to me like it's the USB in your computer.
|
|
|
|
|
Rather thoroughly, yeah. I can see the events complaining about the unexpected reboots (of my own creation), but nothing about hardware faults, unfortunately.
|
|
|
|
|