It occurred to me that, for the dining philosophers, the fork is a resource
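That view of the fork-as-resource can be sketched in a few lines: model each fork as a lock, and avoid the classic deadlock by always acquiring forks in a fixed global order. This is a hypothetical illustration, not anyone's production code.

```python
# Dining philosophers: each fork is a shared resource modelled as a lock.
# Acquiring forks in a fixed global order (lowest index first) prevents
# the circular wait that causes deadlock.
import threading

NUM = 5
forks = [threading.Lock() for _ in range(NUM)]
meals = [0] * NUM

def philosopher(i: int) -> None:
    # Resource ordering: always pick up the lower-numbered fork first.
    first, second = sorted((i, (i + 1) % NUM))
    for _ in range(100):
        with forks[first]:
            with forks[second]:
                meals[i] += 1

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(NUM)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(meals)  # each philosopher eats 100 times
```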
If you mean what I think you mean, 'fork' isn't really the right term. Not sure what they call it TBH, but it's basically two separate filesystems made to look like one, but on a per-directory basis, if that makes any sense.
So, for example, /Applications isn't just a single folder. It's two, and one of those is on a read-only file system. And that's why they've done it - it's a security feature. The read-only file system is immutable and contains all the built-in apps (for better or worse). Apps you install yourself go to the mutable one.
It's also digitally signed, so if anyone does manage to break in then your system won't boot. Hell yeah, secure or what? I mean, what then? A trip to dear old Apple I guess, bring your card.
Paul Sanders.
If I had more time, I would have written a shorter letter - Blaise Pascal.
Some of my best work is in the undo buffer.
They call them forks. The resource fork is the split-off FS. I don't remember what they call the main fork.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
Do they? OK, it's just that, being a grumpy old git, I remember the days when a (resource) fork was something else entirely. Guess we're running out of terms, gotta recycle some of the old ones. I have to say, it does make sense.
Anyway, did I answer your question?
Paul Sanders.
If I had more time, I would have written a shorter letter - Blaise Pascal.
Some of my best work is in the undo buffer.
It's entirely possible I'm remembering incorrectly, but I could swear that's what it's called.
Well, as far as your answer, you didn't tell me much I didn't already know. The issue is that that resource data doesn't always get copied properly.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
OK. FWIW (and for anyone else who might be interested), there's a better explanation of this read-only filesystem business here:
https://superuser.com/a/1495146/546261
The issue is that that resource data doesn't always get copied properly.
Don't understand that (but would like to help). Do you mean [the contents of] an app's Resources folder? And from where to where (in terms of volumes)?
If you're trying to stuff things into /Applications, that's what the Installer app is for. That's what I do (I build an installer package, using pkgbuild and then productbuild) and everything works fine.
Or am I trying to teach my grandmother to suck eggs?
Paul Sanders.
If I had more time, I would have written a shorter letter - Blaise Pascal.
Some of my best work is in the undo buffer.
[Edit: I replied to the wrong post]
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
Going to start off with one correction: Modern applications are not typically split across multiple forks and haven’t been for literally decades. I’m genuinely surprised you’re running into such applications at all in 2023. I’m also curious what tool you were using to manipulate the files that was breaking them, because Macs have always just transparently handled them.
The original Mac file system supported two separate chunks of data per file system entry. There was a data fork which was an unstructured store mostly analogous to the single data stream most early file systems had, and a resource fork that held discrete chunks of data tagged with a type and an integer identifier. The resource fork served the dual purpose of simplifying the use of structured data and providing a means to help mitigate the extremely constrained systems of the day. Applications, for example, kept executable code in the resource fork in multiple chunks that could be loaded and unloaded at need (transparently to the coder) to fit within available memory, not unlike overlay files on MS-DOS.
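That type-plus-integer-ID addressing can be mimicked in a few lines. This is a toy sketch of the data model only; `add_resource`, `get_resource`, and the dict store are illustrative names, not Apple's Resource Manager API.

```python
# Minimal sketch of the classic Mac resource-fork data model:
# chunks of data addressed by a four-character type code plus an
# integer ID. Names and structure here are illustrative only.
resources = {}

def add_resource(res_type: str, res_id: int, data: bytes) -> None:
    assert len(res_type) == 4, "classic type codes are four characters"
    resources[(res_type, res_id)] = data

def get_resource(res_type: str, res_id: int) -> bytes:
    return resources[(res_type, res_id)]

add_resource("STR ", 128, b"Hello")   # a string resource
add_resource("CODE", 1, b"\x4e\x75")  # a code chunk (68k RTS opcode)

print(get_resource("STR ", 128))  # b'Hello'
```

The point of the model is that structured lookup ("give me string 128") replaces parsing a single flat data stream, which is what made it attractive on the constrained machines described above.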
But Mac applications haven't typically been structured that way since the release of Mac OS X. So again, the fact that you're running into them in 2023 is somewhat baffling.
Okay, so when I said it doesn't get copied properly I was taking a liberty to avoid a longer explanation.
Basically, the copy thing is something I have historically run into before, but it has been years, as you said.
But I recently ran into someone with an issue with a resource fork not being read properly, or otherwise being screwed up, while trying to run a Python dev env using VS Code on a Mac. Nobody could help them.
I hate the whole concept.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
The thing to keep in mind is that the fork-based file system and the resource fork specifically were novel and very effective solutions to a couple of very real issues *in the early 1980s*. They should be viewed as an artifact of their time and not through the lens of the present. As others have noted in this thread, of course, fork-supporting file systems are now ubiquitous but forks are not typically used to hold critical information these days. Just lose-able metadata.
Or, you might say that for application areas where they make sense, they have been further developed into rather complex, application-specific container formats.
A container file is a multi-fork file where each fork serves a specific purpose - just like the old MacOS files, or the ADS files. We could have seen software develop to accept forked files as something that everyone had to be able to handle, as an extension of hierarchical file systems that every one must handle today. That didn't happen, so container formats grew up to serve that function.
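As a concrete illustration of that point: many modern container formats (.docx, .epub, .jar, among others) are simply ZIP archives holding named parts, each part serving the role one fork would have. Python's standard `zipfile` module is enough to sketch one; the part names below are made up for the example.

```python
# A toy "container" file: named parts inside a single ZIP archive,
# each part playing the role one fork would have played.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("content.txt", "Hello, container!")   # the "data fork"
    z.writestr("meta/type.txt", "example/document")  # a "metadata fork"

# Any consumer can read a named part back out by name:
with zipfile.ZipFile(buf) as z:
    names = sorted(z.namelist())
    body = z.read("content.txt").decode()

print(names)  # ['content.txt', 'meta/type.txt']
print(body)   # Hello, container!
```

Because the container is a single ordinary file, it survives any tool that can copy one file, which is exactly the property loose file forks lacked.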
I grew up as an IT guy in a world where hierarchical file systems were not the standard. They were starting to arrive, but the Univac OS-1100 file system did not have a hierarchy. Sintran had a disk-user-file hierarchy, but each user had a flat file space. When DEC introduced their "All-in-1" office automation system (frequently referred to as "All-in-several"; the integration was far from perfect), documents were stored in a fixed hierarchy of cabinets, drawers and folders: Their users indicated a clear preference for a fixed system.
Lots of young people today find it hard to understand how it could be possible to organize information in a system without infinite length directory chains. Container formats curb this. If file forks had become common, we certainly would have seen another crop of *nix file systems offering file forks where each fork could be another forked structure, recursively to unlimited depth. We are probably better off with application specific container formats that keep the recursion under control, even though it means that each application area has its own container format.
Their entire spiel all along has been to tuck all those details away and, I think, to intentionally introduce burden to their usage.
This way they can both claim it is "open" while being totally unusable in the context of consumers reaping benefit from an "open" platform.
Oh I bet they discuss it quite a bit. It's just their objectives are strictly aligned with enriching Apple versus doing anything that may otherwise make a lot of sense.
OTOH, Microsoft (or any other company in the business) was founded for the sole purpose of benefitting mankind, and never for financial profit.
Yes, but I don't care if you were raised to be a mass-murdering dictator so long as you're a benevolent king.
Some part of my beef with Apple is exactly that they made MSFT chase them.
I absolutely despise them though and truly wish the company never existed.
I don't even care that the world will never be able to see or know how right I am that we'd be 50 years further along, tech-wise, or more if they had not.
If we want to put it to metaphor it's like young black men worshipping gangster rap dreaming of becoming kingpin drug dealers.
Wordle 910 3/6
⬛⬛⬛⬛🟩
⬛⬛🟩⬛🟩
🟩🟩🟩🟩🟩
⬜⬜⬜⬜🟩
⬜⬜🟩⬜⬜
⬜⬜🟩🟨🟩
🟩🟩🟩⬜🟩
🟩🟩🟩🟩🟩
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
Wordle 910 5/6
⬜⬜⬜🟨⬜
⬜🟨⬜🟨⬜
⬜⬜🟩⬜🟩
🟨🟩🟩⬜🟩
🟩🟩🟩🟩🟩
Wordle 910 4/6
⬛⬛🟩⬛🟩
⬛🟩🟩⬛🟩
🟨🟩🟩⬛🟩
🟩🟩🟩🟩🟩
Ok, I have had my coffee, so you can all come out now!
Wordle 910 3/6*
⬛⬛🟩⬛🟩
⬛⬛🟩⬛🟩
🟩🟩🟩🟩🟩
Wordle 910 4/6
⬜⬜⬜⬜⬜
⬜⬜🟨⬜🟨
⬜🟩⬜⬜🟩
🟩🟩🟩🟩🟩
We are watching the world change. 40 years ago, the PC was super duper, now we see stuff like this.
To be honest, I am impressed by these Reuters articles - it's real research. Note the side links.
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
Good article. I read a similar article about this subject a few days ago.
If "Q-Day" arrives in 2025, I feel things will get messy. At the same time, I think we may find methods using quantum computing to develop other technologies that will take the place of current encryption methods.
I see parallels in AI. While AI will imperil humanity, it will also serve the purpose of resolving these existential threats. The only practical way we can counter the dangers of AI is by using AI. Perhaps it may be the same way with quantum computing.
Think of past events that are similar to what we face today. For example, consider Enigma. In WWII, the Germans believed their Enigma code could not be broken. Yet we all know of Alan Turing; the machine he created successfully broke the Enigma code.
Have you heard of the Navajo Code Talkers? They were U.S. Marines in WWII who used their native language to encode messages. The enemy was never able to break the Navajo code. It was a low-tech solution to a high-tech problem of encoding information. Perhaps the solution in the case of quantum computing may be much the same.
I'm confident that the United States is, and will continue to be, the leader in quantum computing. We have Alphabet, IBM, Microsoft, and many others advancing the technology of quantum computing. No one else is even remotely close to these companies and the U.S. While China is a significant competitor, it will never be the leader in such technology, IMO. I believe this because my experience with "technology" from China is that it's crap. China's "technology" is largely stolen U.S. technology that's poorly replicated.
IMO, our best speculation on how things will unfold with this new technology can be ascertained by taking a look back at history.
I am surprised I did not post earlier about the amazing technology of motion-magnification cameras. It can even be used to hear what former President Trump is saying to his many lawyers, as long as a bag of potato chips is nearby. Though I suppose crunchy Cheetos will also do in a pinch; if I am nearby, neither would be.
As an audiophile, I am waiting for an audio reviewer to use the same technique to observe the vibrations of speaker cabinets during playback.
Video Magnification[^]
Home - RDI Technologies[^]