|
It looks very interesting. Cryptography has long been an interest of mine. The second program I ever wrote handled a simple substitution cipher in BASIC on an HP 3000 using a Teletype. That was in 1974.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
If you've written an algorithm like that, you'll definitely enjoy the book.
About 14 years ago I wrote a little encryptor which used XOR to "encrypt" and "decrypt" user data, which was then saved as hex bytes in a file. It was a silly thing, just to make sure the user didn't screw up the data, but it also taught me a lot about how difficult it is to create true encryption and how easy it is to decrypt such data.
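A minimal sketch of that kind of XOR-and-hex scheme, in Python (the key and file name below are placeholders I made up, not anything from the original program):

# XOR each byte with a repeating key, then store the result as hex text.
# XOR is symmetric, so the same routine "encrypts" and "decrypts".
# This is obfuscation, not real encryption.
KEY = b"secret"  # placeholder key

def xor_bytes(data: bytes, key: bytes = KEY) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def save_obfuscated(text: str, path: str) -> None:
    with open(path, "w") as f:
        f.write(xor_bytes(text.encode("utf-8")).hex())

def load_obfuscated(path: str) -> str:
    with open(path) as f:
        return xor_bytes(bytes.fromhex(f.read())).decode("utf-8")

save_obfuscated("user data goes here", "settings.dat")
print(load_obfuscated("settings.dat"))  # -> user data goes here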
Also, as you read through the entire history of encryption in this book, you will discover that basically everything has been broken by applying frequency analysis, and the lesson there is: randomize your data!
Easier said than done.
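To illustrate the frequency-analysis point, here is a rough Python sketch (the ciphertext is a toy Caesar-shifted example I made up): count how often each symbol appears and compare the ranking against typical English letter frequencies; a simple substitution leaves that ranking intact.

from collections import Counter

# The most common English letters (roughly E, T, A, O, I, N, ...) keep their
# relative frequencies after a simple substitution, so ranking ciphertext
# symbols by count is enough to start guessing the mapping.
ENGLISH_ORDER = "ETAOINSHRDLCUMWFGYPBVKJXQZ"

def frequency_rank(ciphertext: str):
    counts = Counter(c for c in ciphertext.upper() if c.isalpha())
    return counts.most_common()

ciphertext = "WKLV LV D YHUB VLPSOH FDHVDU FLSKHU PHVVDJH"  # Caesar shift of 3
for symbol, count in frequency_rank(ciphertext)[:5]:
    print(symbol, count)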
|
|
|
|
|
It sounds very interesting.
Several years ago I revisited that simple cipher and adapted it to encrypt our application's user permission file, and it worked pretty well.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
A couple of threads earlier I asked the question: should one defrag an SSD or not? I got different answers, so I tried an experiment:
I regularly use Macrium Reflect to create images of my C: drive, which is an NVMe M.2 SSD. Reflect works extremely fast and will create an image as fast as the C: drive can feed it data.
So I created a system image and noted the speed at which Reflect was writing it to the target. It reached a maximum speed of 6.7 GB/s. Then I ran "defrag C:" from a command prompt and got a report that the C: drive was 20% fragmented before it was successfully defragged.
Then I ran Reflect again and this time it reached a maximum speed of 7.8 GB/s!
It seems to me the speed at which an SSD can read large volumes of data is affected by fragmentation.
Note: I ran the trim command on the same drive yesterday and it seems this did not remedy the fragmentation.
Thanks to all those who expressed an opinion on SSD fragmentation, but I will be running defrag from time to time. If that shortens the life of the SSD, well, they are cheap and easy to replace!
Note: Windows reported as follows after defragging the C: drive:
Pre-Optimization Report:
Volume Information:
Volume size = 930.65 GB
Free space = 868.64 GB
Total fragmented space = 20%
Largest free space size = 863.72 GB
Note: File fragments larger than 64MB are not included in the fragmentation statistics.
The operation completed successfully.
Post Defragmentation Report:
Volume Information:
Volume size = 930.65 GB
Free space = 868.64 GB
Total fragmented space = 0%
Largest free space size = 863.75 GB
Note: File fragments larger than 64MB are not included in the fragmentation statistics.
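For anyone who wants to repeat the analysis step that produced the report above, a small sketch of how it could be scripted (assumes Windows, an elevated prompt, and the built-in defrag tool's /A analyze-only flag):

import subprocess

# Run Windows' built-in defrag tool in analyze-only mode (/A) so nothing is
# actually moved, then pull the fragmentation lines out of its report.
# Must be run from an elevated (administrator) prompt.
result = subprocess.run(
    ["defrag", "C:", "/A"],
    capture_output=True, text=True, check=True
)
for line in result.stdout.splitlines():
    if "fragmented" in line.lower():
        print(line.strip())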
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
The only reason we used to use the defrag tool was to watch the animation of the blocks being moved around.
CI/CD = Continuous Impediment/Continuous Despair
|
|
|
|
|
And the soothing grinding noise of the HDD
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
The shortest horror story: On Error Resume Next
|
|
|
|
|
Did you run the tests more than once before and after the defrag?
|
|
|
|
|
A valid question! I just now ran the "after" test again and got the same result. The "before" test I ran many times over the past weeks, and I never got the speed that I am getting now.
Also: I did a clean install on the machine 3 days ago, and this may explain the 20% fragmentation.
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
Cp-Coder wrote: It seems to me the speed at which an SSD can read large volumes of data is affected by fragmentation.
People who pretend to be experts and claim that, because there are no mechanical parts in an SSD, fragmentation is no longer an issue clearly do not understand that the file system software can also be a bottleneck. But the fact is, nobody verifies anything; they just repeat the same crap.
That being said, it's much less of an issue these days. Back in the day, if you had a fragmented file system... you would know. SSDs work substantially faster than mechanical drives, in theory at the speed of electricity. It's still a tradeoff between defragging and a shorter lifespan for the drive, though. I mean, they're much cheaper now and last a long time, but the tradeoff is still worth knowing about.
Just a tip to keep the file system from fragmenting: if you have a bunch of files that you move around a lot, you can always dump them in a zip file that's lightly compressed. It'll spare your real file system. Granted, this is probably better suited to tiny text files that aren't source controlled, so maybe it's not practical.
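If anyone wants to try that, a quick sketch of the lightly-compressed-zip idea in Python (the file names are placeholders):

import zipfile

# Pack a handful of frequently-shuffled small files into one archive using the
# lightest deflate level, so the file system sees a single file instead of many
# tiny ones being created, moved, and deleted.
files_to_pack = ["notes1.txt", "notes2.txt", "todo.txt"]  # placeholder names

with zipfile.ZipFile("scratch_files.zip", "w",
                     compression=zipfile.ZIP_DEFLATED,
                     compresslevel=1) as archive:
    for name in files_to_pack:
        archive.write(name)

# Reading one back later:
with zipfile.ZipFile("scratch_files.zip") as archive:
    print(archive.read("todo.txt").decode("utf-8"))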
Jeremy Falcon
|
|
|
|
|
So I have this Ferrari Daytona SP3... and I was wondering if I could use jet fuel instead of gasoline. I asked online, and most people suggested that jet fuel was probably a bad idea since it would likely shorten the car's life. I decided to test it at the track and discovered that using gasoline got me from 0-100mph in about 5.8s and topped out at about 210mph. However, if I used jet fuel it got me from 0-100mph in 4.7s and topped out at 242mph.
Awesome! I'm sticking with jet fuel!! Never mind that I live in the city and my average car trip is less than 3 miles (round trip).
Bottom line: Not sure that using Macrium Reflect is the best judge of your real world system performance. Just saying...
|
|
|
|
|
Since I use Macrium Reflect almost on a daily basis, it is a valid metric for me. I will continue doing what is best for me, and you can do whatever works for you!
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
Cp-Coder wrote: I will continue doing what is best for me, and you can do whatever works for you!
Of course... BTW - I was joking with my "analogy"; hence the "joke" icon of my post.
Out of curiosity, what role / function does this PC perform that necessitates such heavy use of Reflect?
|
|
|
|
|
I keep my data on a separate drive, which I back up separately, so the C: drive only has Windows and the applications. So my Macrium images take less than a minute to create. Since it hardly takes any time, I take an image first thing every morning, and I can restore my machine to a previous state if I pick up anything nasty or unwelcome.
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
Cp-Coder wrote: So my Macrium images take less than a minute to create
So if it slowed down by, say, 10%, then that would be too slow?
|
|
|
|
|
Ultimately, that's all that matters, isn't it?
If you have a measurable difference in performance, stick with your current method.
|
|
|
|
|
fgs1963 wrote: Awesome! I'm sticking with jet fuel!!
Next test is with the lawn mower?
|
|
|
|
|
I would imagine there's some overhead in processing a ton of file pointers to determine where the next chunk of a badly fragmented file is, as opposed to having a file stored in one continuous chain. Would that account for the difference? I have no idea.
Still, I don't know about Macrium's internals, but in theory, if a backup program worked by copying entire disks/partitions, as opposed to reading file systems, then it wouldn't matter how fragmented (or not) a disk is, or whether the software even needs to understand what file system is being used.
Of course that means backing up a 1TB drive that's only 10% full will back up 1TB and not 100GB. I have a 2-disk USB enclosure that's like this. There's a button on the front that, if held when powering up, will blindly clone one drive to the other, regardless of file system (assuming the target is the same or larger capacity). And if the source drive has tons of fragmentation, the individual cloned files will be as badly fragmented.
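A rough sketch of what that kind of blind, sector-level clone boils down to (the device paths are placeholders; raw access needs administrator rights, and real tools handle sizes, alignment, and errors far more carefully):

# Naive block-level clone: copy raw bytes from one device to another (or to an
# image file) in fixed-size chunks, with no idea what file system is on it.
# Fragmentation doesn't matter here because the copy is purely sequential.
CHUNK = 4 * 1024 * 1024  # 4 MiB per read

def clone_raw(source_path: str, target_path: str) -> None:
    with open(source_path, "rb") as src, open(target_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            dst.write(chunk)

# e.g. clone_raw(r"\\.\PhysicalDrive1", "drive1.img")  # placeholder paths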
|
|
|
|
|
No. I have set up Macrium so that it only includes actual files in the image, so my images reflect the size of the used parts of the disk, not the entire disk. This works very well, and I have restored my C: drive from such images dozens of times. Macrium also includes all partitions on the system drive by default. It is really a fantastic utility for restoring your machine in case of some disaster.
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
dandy72 wrote: I would imagine there's some overhead in processing a ton of file pointers to determine where the next chunk of a badly fragmented file is, as opposed to having a file stored in one continuous chain. Would that account for the difference? I have no idea.
The actual read operation(s) might not take much longer, but the host turnaround time will certainly affect things.
Optimized disk:
    single read operation of N blocks
Fragmented disk:
    read operation of N1 blocks
    (host turnaround time 1)
    read operation of N2 blocks
    (host turnaround time 2)
    ...
where N1 + N2 + ... = N
Note that if the hardware implements read gather/write scatter operations and the O/S supports them, this may mitigate much of the host overhead due to fragmentation. I know that eMMC has such operations, and I assume that most other current protocols do, too.
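To put rough numbers on that, a back-of-the-envelope sketch (the bandwidth and turnaround figures are assumptions picked for illustration; only the shape of the comparison matters):

# Toy model: total time = transfer time + one host turnaround per extra read
# command. The data volume is identical; only the number of commands changes.
BANDWIDTH = 7.0e9    # bytes/s, assumed sequential read speed
TURNAROUND = 50e-6   # seconds of host overhead per additional command
DATA = 50e9          # a 50 GB image read

def read_time(total_bytes: float, fragments: int) -> float:
    return total_bytes / BANDWIDTH + (fragments - 1) * TURNAROUND

print(read_time(DATA, 1))        # contiguous: one big read, ~7.1 s
print(read_time(DATA, 200_000))  # heavily fragmented: ~10 s of extra turnaround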
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
No "pointer space" reclaimed apparently. Some sort of reorg based on size and or frequency of use?
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
A long chunk of contiguous data will make the best use of the onboard precaching, allowing the disk to reach its reported speeds (e.g. 5500 MB/s reading), and it MAY be a significant improvement for the loading times of large games or large video files.
It will shorten the SSD's lifespan, especially on cheap ones (many use MLC or TLC to increase the available space at the cost of longevity), so if you really have to work on large video files, or regularly play huge games with long loading times even on an SSD (bad game design), I'd consider buying a high-quality, lower-capacity drive for that kind of work.
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
The shortest horror story: On Error Resume Next
|
|
|
|
|
Hello all,
INTRODUCTION
Given the new accounting laws in our country, every company will need accounting software that complies with those laws.
Even freelancers (like me) will have to adopt software like that.
I don't trust the cloud, and I don't want to pay a fee every month to be able to use my own accounting data.
I currently own a NAS which is more than enough for my needs, but it is not capable of running the accounting programs I would be able to use in my country.
Most of the accounting programs I could use require an SSD and Windows to run.
Getting a server would mean:
* Getting a server, some SSD and HDD disks.
* Getting a UPS.
* Getting a small rack.
* Getting a Windows server license.
* Using our current NAS as a backup target for that server, and keeping our extra NAS backups on external USB HDDs.
QUESTION
What server / option would you recommend for this kind of job?
Would it be better to get a tower server or a rack server?
As soon as we have children the server, NAS, UPS... will have to be placed inside a rack anyway.
It would be nice to have a mix of SSDs and normal HDDs: SSD for the OS and the accounting program, and HDD to store everything else.
+/- 8TB of data space available would be nice.
+/- 32GB RAM available would be nice.
Would it be better to install the accounting program inside a virtual machine, just to make it easier to move it from one server to another in the future (if needed)?
Do you agree that it's better to get a server than a normal workstation for all this?
And as a bonus... what would you use that server for, apart from everything mentioned above? Any additional hints/ideas?
Thank you all!
|
|
|
|
|
Are there hardware requirements for the software?
Are you the sole user of the data, or do your clients need access to it?
Whatever you do, make sure your backups work; plan regular tests of your backups.
I would use the server for a single purpose.
CI/CD = Continuous Impediment/Continuous Despair
|
|
|
|
|
Maximilien wrote: I would use the server for a single purpose
That's a good idea. If he feels the need to overpay and get a beefy computer, then he can at least use something like VMware Server to split it up.
Jeremy Falcon
|
|
|
|
|
Yes, the requirements on their web site are:
Intel Xeon (*1) + 8 GB RAM + 100 GB SSD or SAS.
Given their crazy requirements, I thought of getting something that could run that software plus some of the things I currently have on my NAS, and make it a little future-proof...
|
|
|
|