|
- A drive is generally a physical device that is, or behaves like, a random access disk.
- A logical drive is a portion of a physical disk that is managed by software so it appears to the user as a physical drive. A logical drive is addressed as if it starts at address zero, even if it is physically located elsewhere on the device.
- A partition is some portion of a physical disk that may be managed as a logical drive or raw device.
For example, a physical disk with 1000 sectors could be partitioned as follows:
0 - 50 Partition 1 : raw drive used by Operating System
51 - 250 Partition 2 : logical drive C
251 - 990 Partition 3 : logical drive D
991 - 999 Partition 4 : raw drive used by Operating System
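To make the address translation concrete, here is a minimal Python sketch of how a logical sector number maps back to a physical one under the example layout above (the partition table and function name are purely illustrative, not any real OS API):

```python
# Hypothetical partition table from the 1000-sector example above:
# each entry maps a drive name to its partition's physical start sector.
PARTITIONS = {
    "raw1": 0,    # Partition 1: sectors 0-50, raw drive for the OS
    "C":    51,   # Partition 2: logical drive C, sectors 51-250
    "D":    251,  # Partition 3: logical drive D, sectors 251-990
    "raw2": 991,  # Partition 4: sectors 991-999, raw drive for the OS
}

def physical_sector(drive, logical_sector):
    """A logical drive is addressed from zero, so logical sector N
    lands at the partition's start sector plus N on the real disk."""
    return PARTITIONS[drive] + logical_sector

print(physical_sector("C", 0))   # 51: logical address zero of drive C
print(physical_sector("D", 10))  # 261: ten sectors into partition 3
```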
|
|
|
|
|
Shot in the dark someone might have ideas.
Toshiba Satellite won't turn on anymore. AC led comes on, battery led comes on (when AC is on only), but power switch does nothing at all. Fan doesn't come on, HDD doesn't spin up. Nothing.
Ideas?
|
|
|
|
|
Does it start without, or with a different battery?
|
|
|
|
|
Don't have a different battery to try it with, but it won't start without.
This morning, power led is on, battery charging led is on. So power is actually getting into the system. Still nothing else.
On/Off switch?
Tempted to try a drop test.
|
|
|
|
|
That rules out a faulty battery as the cause.
So you're getting power into the system.
But if it happens without a single beep, that means the POST isn't running at all, which basically leaves you four possibilities:
1. Faulty BootROM.
2. Faulty RAM.
3. Broken motherboard.
4. Broken CPU (I've actually only heard of this twice, ever).
Check the CMOS battery if there is one, and reset the BootROM if such a reset button exists. This is a very uncommon problem nowadays, as most computers use flash memory instead of NVRAM.
Faulty RAM is easy to test: simply remove it, and the POST will run and complain about missing/faulty RAM, usually with a long beep and a couple of short beeps. You'll have to RTFM to find the exact beep sequence.
In the other two cases, I hope the laptop is still under warranty.
|
|
|
|
|
Fortunately, it's not my laptop.
Unfortunately, I'm probably the one who broke it while trying to fix a different problem. (Keyboard screwed up.)
Fortunately, the owner isn't pissed at me.
And it will be walking out the door in a few, so I'm not going to worry about it. Too much.
And I will never offer to help someone with their computer again.
Thanks for the hints.
|
|
|
|
|
Did you try to disconnect the keyboard and then start the computer?
|
|
|
|
|
Yes. No diff.
And she just drove away with it...
She says "We always have it plugged in on the desk, we never take it anywhere."
So I say "Why don't you get a desktop?"
|
|
|
|
|
Most likely something is wrong with the motherboard. On laptops it's usually not worth trying to replace, so I'd recommend they get something else and then move their data over for them. It's easy to mount a drive on another machine even if it's bootable: you can do it after you boot, or just make sure you have the proper boot order.
|
|
|
|
|
It seems like my hard disk is running too slow. Perhaps it started after Windows 7 SP1 was installed a couple months or so ago. I don't know. I just know it is painful to boot. Painful to copy huge directories around. Painful to load large apps and projects and such. Any one else have this problem?
I have Windows 7 64-bit, Quad-Core Xeon, 6GB RAM, 2 WD SATA II 500 GB hard disks, etc. I ran WINSAT (right click CMD then run as Administrator then the following from the Vista/W7 command prompt):
winsat disk -read -ran -ransize 4096 -drive c
winsat disk -write -ran -ransize 4096 -drive c
winsat disk -read -ran -ransize 524288 -drive c
winsat disk -write -ran -ransize 524288 -drive c
And got this:
D:\>winsat disk -read -ran -ransize 4096 -drive c
Windows System Assessment Tool
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: Storage Assessment '-read -ran -ransize 4096 -drive c'
> Run Time 00:00:10.59
> Disk Random 4.0 Read 0.39 MB/s
> Total Run Time 00:00:11.31
D:\>winsat disk -write -ran -ransize 4096 -drive c
Windows System Assessment Tool
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: Storage Assessment '-write -ran -ransize 4096 -drive c'
> Run Time 00:00:03.39
> Disk Random 4.0 Write 1.44 MB/s
> Total Run Time 00:00:04.06
D:\>winsat disk -read -ran -ransize 524288 -drive c
Windows System Assessment Tool
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: Storage Assessment '-read -ran -ransize 524288 -drive c'
> Run Time 00:00:26.38
> Disk Random 512.0 Read 20.25 MB/s
> Total Run Time 00:00:27.50
D:\>winsat disk -write -ran -ransize 524288 -drive c
Windows System Assessment Tool
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: Storage Assessment '-write -ran -ransize 524288 -drive c'
> Run Time 00:00:15.24
> Disk Random 512.0 Write 43.47 MB/s
> Total Run Time 00:00:16.11
Any idea if that seems normal? Is your system faster? Thanks!
|
|
|
|
|
Here are my results (Seagate Barracuda 7200.11 1TB SATAII)
Windows System Assessment Tool
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: Storage Assessment '-read -ran -ransize 4096 -drive c'
> Run Time 00:00:12.51
> Disk Random 4.0 Read 0.34 MB/s
> Total Run Time 00:00:13.65
Windows System Assessment Tool
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: Storage Assessment '-write -ran -ransize 4096 -drive c'
> Run Time 00:00:02.86
> Disk Random 4.0 Write 2.00 MB/s
> Total Run Time 00:00:03.88
Windows System Assessment Tool
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: Storage Assessment '-read -ran -ransize 524288 -drive c'
> Run Time 00:00:19.05
> Disk Random 512.0 Read 27.93 MB/s
> Total Run Time 00:00:20.12
Windows System Assessment Tool
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: Storage Assessment '-write -ran -ransize 524288 -drive c'
> Run Time 00:00:08.03
> Disk Random 512.0 Write 74.69 MB/s
> Total Run Time 00:00:09.08
You should first check that the disk is not heavily fragmented. Get a disk defragmenter that gives you a visual idea of the fragmentation spread (Auslogics is free; I use it).
Heavy fragmentation can show up as exactly the symptoms you are seeing.
|
|
|
|
|
Thanks. That's pretty much the same as my tests. I thought it would be a lot faster.
I'm running Diskeeper, so there's essentially zero fragmentation on the drive.
I don't understand why it is so slow on smaller sizes. Larger reads and writes are a lot faster (over 100MB/s when you get big enough).
But as developers, we have source trees and such with TONS of small files in them. So copying a folder from one drive to another takes FOREVER. Just emptying the recycle bin can be a painful process if you throw about 10,000 small files into it.
What happens is that the CPU (8 cores) ends up sitting around with not much to do all the time, because it is constantly waiting on the hard disk.
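A quick back-of-envelope sketch makes the gap visible. The throughput figures below are the WinSAT numbers quoted above; the file count and average file size are illustrative assumptions, not measurements:

```python
# Why copying many small files is so slow: the same number of bytes
# moves at the small-random-I/O rate, not the large-block rate.
small_write_mb_s = 1.44    # WinSAT random 4 KB write result above
large_write_mb_s = 43.47   # WinSAT random 512 KB write result above

n_files = 10_000           # assumed: files dumped in the recycle bin
avg_size_kb = 30           # assumed: average small-file size
total_mb = n_files * avg_size_kb / 1024

t_small = total_mb / small_write_mb_s  # behaves like small random I/O
t_large = total_mb / large_write_mb_s  # if it streamed as big blocks

print(f"{total_mb:.0f} MB as small random writes: ~{t_small/60:.1f} min")
print(f"same data as large-block writes: ~{t_large:.0f} s")
```

Roughly a thirty-fold difference for the same amount of data, which matches the "CPU sits idle waiting on the disk" experience.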
It just seems like there's something horribly wrong with that, and nothing can be done about it. I wonder if a RAID array would be faster? Because right now, my girlfriend's el cheapo Vista box gets better results than my super expensive, tricked-out dev box.
|
|
|
|
|
chimera967 wrote: we have source trees and such with TONS of small files in them
There you have the reason.
chimera967 wrote: I wonder if a RAID array would be faster?
Hint, SSD.
|
|
|
|
|
|
Quite right. Can't do that on my laptop though.
|
|
|
|
|
Okay, thanks. Do you know if people actually use SSD for dev boxes?
|
|
|
|
|
I do.
It's the biggest single hardware improvement I've done since getting a harddrive as such (instead of a floppy).
|
|
|
|
|
Cool.
|
|
|
|
|
You will almost always see a significantly lower transfer rate for smaller file sizes.
Look at the graph at the bottom of my article for my NAS box, you will see a much lower transfer rate for small block sizes compared to larger block sizes.
QNAP NAS Memory Upgrade, Hardware Change and Performance Benefits[^]
Also in the article, I have included some benchmark software I use, and an Excel spreadsheet template I use for tracking benchmarks between mods etc.
|
|
|
|
|
(I hope this is the right forum, then.)
Hi all,
I'm about to begin a small project in which I must be able to store and look up as many as 20 million files, in the best possible way. Needless to say: fast.
For background I have read:
http://en.wikipedia.org/wiki/NTFS#Limitations
http://www.ntfs.com/ntfs_vs_fat.htm
And now my question: dealing with a production load of around 60,000 files (pictures) per day, each around 300 KB in size, what is the best ratio of files per directory, and number of directories, to make the search time as short as possible? Obviously I should not put all the files in one directory, but spread them over a number of directories. So what would be the best economy for such a thing?
Seems to be hard to find information about on the web.
Thanx' in advance,
Kind regards,
Michael Pauli
|
|
|
|
|
Hi,
1.
I tend to limit the number of files per folder to 50 or 100. In my experience it is not very relevant if you never need to browse the folder with, say, Windows Explorer: when your app knows which file to access, the count does not matter. If you can group the files logically (say, by topic), then by all means do so. OTOH, if you have to open the folder in Explorer, especially on a remote computer, things may slow down considerably when the folder holds hundreds of files/folders or more. If so, use a two-stage or three-stage organization; with a maximum of N files per folder, that can hold N*N or N*N*N files.
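As a sketch of that multi-stage idea, here is one way to hash a file name into a fixed two-level folder layout. The fan-out of 100 and the path scheme are assumptions for illustration, not a recommendation of exact numbers:

```python
# Map each file name into one of FAN_OUT * FAN_OUT buckets, so no
# single folder ever holds more than a small share of the files.
import hashlib
from pathlib import Path

FAN_OUT = 100  # assumed maximum sub-folders per level

def bucket_path(root, filename):
    """Return a stable two-level path like root/42/17/filename;
    the same name always hashes to the same bucket."""
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    level1 = h % FAN_OUT
    level2 = (h // FAN_OUT) % FAN_OUT
    return Path(root) / f"{level1:02d}" / f"{level2:02d}" / filename

print(bucket_path("pictures", "IMG_0001.jpg"))
```

Lookup then never needs a directory scan: the app recomputes the bucket from the name and opens the file directly.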
2.
Search what? file content? file names? partial file names? If file names, then again, organize a multi-level folder hierarchy based on what matters most to you (could be the first and second character of the file names).
3.
Whatever it is you really need, just give it a try. In a matter of minutes a test app can create and store a huge number of files (real or dummy), and you can experiment with the result.
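For instance, a throwaway Python experiment along those lines; the file count and file size here are deliberately tiny stand-ins for the real 300 KB pictures:

```python
# Create a pile of dummy files in one flat folder, then time how long
# it takes to look them all up by name.
import os
import tempfile
import time

with tempfile.TemporaryDirectory() as root:
    names = [f"img_{i:05d}.jpg" for i in range(2000)]
    for name in names:
        with open(os.path.join(root, name), "wb") as f:
            f.write(b"\0" * 300)  # tiny stand-in for a real picture

    start = time.perf_counter()
    found = sum(os.path.exists(os.path.join(root, n)) for n in names)
    elapsed = time.perf_counter() - start
    print(f"looked up {found} files in {elapsed * 1000:.1f} ms")
```

Rerun the same experiment with different folder fan-outs and realistic file counts to see where your filesystem actually starts to hurt.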
PS: I'm sure all this is in the wrong forum, it isn't hardware related, is it?
|
|
|
|
|
Seriously, use a database instead. You lose very little storage space and gain so much on lookup, at least if it's properly indexed.
|
|
|
|
|
Hi Jörgen,
I totally agree with your comment, but my customer wants to use a filesystem and not an Oracle DB or similar. I really don't understand why, but I'm told it's something about maintenance and backup.
Kind regards,
Michael Pauli
|
|
|
|
|
Yeah, that's utter bullshit.
Your customer is going to find that that method is non-performant and limited, as well as very easy to screw up while doing "maintenance".
The more files and directories you shove into the directory structure, the slower a single search is going to get. Indexing won't help much as the indexes will be limited to the properties of the files themselves as well as the metadata stored in the image files.
The more files and directories you add, the more the NTFS data structures grow, eventually taking up gigabytes of space and slowing your machine's boot time; and if something should happen to those tables, God help you when running CHKDSK on it. Bring a cot to sleep on.
The backup argument is also garbage, as it's just as easy to back up a database as it is to back up the massive pile of debris you're about to litter the drive with.
|
|
|
|