|
Daniel Pfeffer wrote: There is no technical reason why
There is a commercial reason, though: when that was tried long ago with Pascal, it failed.
|
|
|
|
|
I'm aware of that. We were having a technical discussion, not discussing the commercial viability of such an implementation.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Daniel Pfeffer wrote: There is no technical reason why one could not build hardware which has the Java bytecode as its machine language. Ditto for C#. If by C# you mean the .NET Intermediate Language (IL), you are comparing two languages at approximately the same abstraction level, but very different in form.
Java bytecode, like lots of other P-code formats (strongly inspired by the P4 code of the original ETH Pascal compiler), is intended to be complete and ready for execution, with no loose ends (except for those the language defines to be loose, e.g. late binding) - similar to 'real' binary machine code, but for a virtual machine. The instructions (i.e. bytecodes) are executed one by one, independently of each other.
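To make 'executed one by one' concrete, here is a minimal sketch in C of the kind of dispatch loop a bytecode interpreter runs. The opcodes are made up for illustration - the real JVM instruction set is far richer - but the shape is the same: fetch one bytecode, execute it, move on, with no further translation step.

/* Minimal sketch of a bytecode dispatch loop - not the real JVM,
 * just the general shape: each opcode is self-contained and is
 * executed one at a time, with no further translation step. */
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };   /* hypothetical opcodes */

static void run(const int *code)
{
    int stack[64];
    int sp = 0;                 /* stack pointer */
    for (int pc = 0; ; ) {      /* program counter */
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];            break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];    break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]);        break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* "push 2, push 2, add, print" - each instruction stands on its own */
    const int program[] = { OP_PUSH, 2, OP_PUSH, 2, OP_ADD, OP_PRINT, OP_HALT };
    run(program);   /* prints 4 */
    return 0;
}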
IL, on the other hand, has a lot of loose ends that must be tied up before execution. It contains lots of metadata that are not machine instructions but indicate how instructions should be generated. Although you could in principle try to 'interpret' the IL, you would have to construct fairly large runtime data structures to know how to generate the interpretation, similar to those built by the jitter when it compiles the IL to machine code. So you are really doing the full compilation, except that you are sending the binary instructions to the execution unit rather than to an executable image.
The line between compilation (followed by execution) and interpretation is fuzzy.
If by C# you mean direct source-code interpretation, you have a huge task to solve. Here, you would have to build far more complex runtime data structures to support the interpreter. Building these goes a long way towards making a full parse tree of the source code, and at that point you have already done a significant part of the compilation job.
Compilers are so fast nowadays that I see no practical advantages in interpreting program code.
For building dedicated hardware:
UCSD Pascal, one of the better-known Pascal interpreters for PCs, used the P4 bytecode. It also ran on the PDP-11, including the single-chip LSI-11, for which microcode was written to run P4 directly (rather than the PDP-11 instruction set). It turned out to be significantly slower than the PDP-11 software interpreter.
There are lots of similar stories of hardware implementations not living up to expectations. Intel's object-oriented CPU, the iAPX 432, was simulated on an 8086. The first 432 implementation turned out to be slower than the simulator.
Yet another example: I worked on a 'supermini' (i.e. VAX class) machine that provided instructions for trigonometric functions. The Fortran compiler/libraries didn't use them; they were too slow. Calculating the functions the traditional way turned out to be faster. I talked to the designers of the CPU, asking why the instructions couldn't do it the same way as the libraries. They just mumbled a lot about having to be prepared for interrupts in the middle of the instruction. But the library sequence of instructions can be interrupted midway, right? Well, that is a different situation ... In other words, I never got a decent answer.
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
trønderen wrote: IL, on the other hand, has a lot of loose ends that must be tied up before execution. It contains lots of metadata that are not machine instructions but indicate how instructions should be generated. Although you could in principle try to 'interpret' the IL, you would have to construct fairly large runtime data structures to know how to generate the interpretation, similar to those built by the jitter when it compiles the IL to machine code. So you are really doing the full compilation, except that you are sending the binary instructions to the execution unit rather than to an executable image.
I sit corrected.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
trønderen wrote: I worked on a 'supermini' (i.e. VAX class) ... asking why the instructions couldn't do it the same way
That is an interesting story.
|
|
|
|
|
Quote: Can the Java VM or C# runtime (and .NET framework) be implemented in those languages? I doubt it.
C# supports .NET Native compilation (since VS 2015), so you can compile C# to machine code in Visual Studio. Someone more proficient in Java could answer whether Java can be compiled to machine language.
Also, C# is always run compiled to machine language. C# compiles to Microsoft Intermediate Language (MSIL), and the .NET runtime then performs its Just-in-Time (JIT) compilation to machine language when the program is run, based on the specific environment it runs in. The Java VM does something similar, so both produce machine language differently and more efficiently than an interpreter would.
|
|
|
|
|
MSBassSinger wrote: so you can compile C# in Visual Studio to machine code
How does it link?
MSBassSinger wrote: Also, C# is always run compiled to machine language. C# compiles to Microsoft Intermediate Language (MSIL)
Java does the same.
That however is not really applicable to this discussion.
|
|
|
|
|
PIEBALDconsult wrote: be implemented in those languages? ... look only at the core of the syntax,
Then I believe my answer would be yes.
The standard Java compiler is written entirely in Java. It emits class files. Programmatically, it could emit assembler.
Not sure how C# does it, but pretty sure the process is doable.
I don't know the details of JavaScript well enough to know what is included, but I wouldn't be surprised if it were possible.
PIEBALDconsult wrote: though BASIC is the only one I can think of quickly which has had successful implementations of both types.
I believe I remember seeing a C interpreter long ago, before a lot of the new stuff was added.
PIEBALDconsult wrote: so are they truly compiled or just interpreted?
Nowadays I suspect the distinction is meaningless. It certainly doesn't mean much to me, because I know how compilers work: they all emit something, and what that something is isn't all that important in a discussion like this.
Looking it up, apparently Lisp started out as an interpreter in the late 1950s.
PIEBALDconsult wrote: though BASIC is the only one I
That might be a distinction, since with BASIC one often didn't 'compile' it but rather just 'ran' it: it compiled and then ran all in one go, so there never was a binary file. However, I know that internally these were still distinct steps (compile, emit code, then run the emitted code).
Perl is the same.
I had the misfortune to work on a system which did NOT compile the language to any intermediate form. That was when I first learned how languages should work. That system ran lines by parsing them as it went, so a for loop would re-parse the for line every time the loop executed.
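A tiny, purely hypothetical C sketch of why that hurts (the statement syntax and helper names are made up for illustration): the naive approach re-parses the text of the loop condition on every pass, while parsing it once into an intermediate form up front leaves only a cheap comparison per iteration.

/* Hypothetical sketch (not any real product) of why line-by-line
 * re-parsing hurts: the naive interpreter re-scans the source text of
 * the condition on every pass, while a tokenizing interpreter pays the
 * parsing cost once and then only compares integers. */
#include <stdlib.h>
#include <string.h>

/* Naive: the text after '<' is re-parsed on every loop iteration. */
static int naive_condition(const char *src, int i)
{
    long limit = strtol(strchr(src, '<') + 1, NULL, 10);    /* parse again */
    return i < limit;
}

/* Tokenized: parse once up front, then each iteration is a plain compare. */
typedef struct { long limit; } CompiledCondition;

static CompiledCondition compile_condition(const char *src)
{
    CompiledCondition c;
    c.limit = strtol(strchr(src, '<') + 1, NULL, 10);        /* parse once */
    return c;
}

static int compiled_condition(const CompiledCondition *c, int i)
{
    return i < c->limit;
}

int main(void)
{
    const char *stmt = "for i = 0 while i < 1000000";         /* made-up syntax */
    CompiledCondition c = compile_condition(stmt);
    long n1 = 0, n2 = 0;
    for (int i = 0; naive_condition(stmt, i); i++)  n1++;     /* parses every pass */
    for (int i = 0; compiled_condition(&c, i); i++) n2++;     /* parses once */
    return n1 == n2 ? 0 : 1;
}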
|
|
|
|
|
Turing completeness: Geek&Poke: Spoilsports[^]
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
I'd say that C++ is not a real programming language - it is a complex programming language.
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
Don't get me started on imaginary languages.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Isn't that one half of a complex language?
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
I'll double down on that.
|
|
|
|
|
I almost exclusively use C these days. A real programming language as well.
I am an old timer raised on Fortran, PL/I, Cobol, Algol, Basic, Pascal, ...
C is down and dirty which keeps it elegantly efficient.
C++ is up and dirty. I get lost in the clouds of classes.
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
If they added templates to C I'd consider switching. But until then, it's C++ for me. I've been seduced by the power of the dark side.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Dangit, now I don't feel Turing Complete. I had to look up DFA vs NFA and still don't know what they mean. How have I been programming for over 20 years with no idea what they are? Please don't tell my boss!
Hogan
|
|
|
|
|
I actually wrote an article recently to help people understand them:
FSM Explorer: Learn Regex Engines and Finite Automata[^]
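For anyone in the same boat as the previous poster, the short version: a DFA (deterministic finite automaton) is in exactly one state at a time and makes one transition per input character, while an NFA (nondeterministic finite automaton) may effectively be in several states at once and has to be simulated or backtracked. A minimal, hand-rolled DFA in C that recognizes the regular expression ab*c might look like this (the state names and pattern are made up for illustration):

/* A minimal DFA recognizing "ab*c".  It is in exactly one state at a
 * time and does one transition per input character - no backtracking -
 * which is why DFA-based regex engines run in linear time. */
#include <stdio.h>

enum { START, SEEN_A, ACCEPT, DEAD };

static int next_state(int state, char ch)
{
    switch (state) {
    case START:  return ch == 'a' ? SEEN_A : DEAD;
    case SEEN_A: return ch == 'b' ? SEEN_A : (ch == 'c' ? ACCEPT : DEAD);
    default:     return DEAD;               /* ACCEPT and DEAD are sinks */
    }
}

static int matches(const char *s)
{
    int state = START;
    for (; *s; s++)
        state = next_state(state, *s);
    return state == ACCEPT;
}

int main(void)
{
    printf("%d %d %d\n", matches("abbbc"), matches("ac"), matches("abx"));
    /* prints: 1 1 0 */
    return 0;
}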
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
What you describe is why I never bothered with JavaScript in years past. C#, like many other languages, is an object-oriented, general-purpose, fully developed language. JavaScript was created in 1995 (originally called Mocha) as a quick fix to a browser problem back in the early days of browsers. Once it took a foothold with a lot of web developers early on, the resistance to change (or significant improvement) became unbreakable, resulting in gobs and gobs of JavaScript (and all the hacks built around it) in a lot of web pages.
If you want to build web apps, try using open-source WebAssembly (Blazor in Microsoft's environment). Instead of JavaScript, you can use C#. The app is compiled to WebAssembly (efficiently, so only the code that is actually needed is sent to the browser) and executed in the browser's WebAssembly engine (not the JavaScript engine). Other languages outside the Microsoft world also support WebAssembly, but their IDEs are not quite as robust - yet. And most of the companies that provide third-party UI components for other languages (including JavaScript) provide the same ones for WebAssembly. Just don't fall for the dying throes of JavaScript worshippers when they try to get you to include JavaScript in your WebAssembly front end: JavaScript interop is slow, and JavaScript in a WebAssembly app is a waste of time.
JavaScript and TypeScript (which is just a JavaScript generator) will be around a long time simply because they have been so widely used in the past (when there was no viable alternative). There are still lots of COBOL, FORTRAN, and VB6 programs around today, being maintained only because the business side of whoever owns the code does not want to pay the cost of converting to a modern language.
|
|
|
|
|
I'm not looking to develop web pages.
I was just looking to teach myself TypeScript, since JS is being used outside the browser now.
I was not pleased with what I found.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Quote: I'm not looking to develop web pages.
I see. I misinterpreted what you meant overall. My fault.
Quote: I was just looking to teach myself TypeScript, since JS is being used outside the browser now
I would be curious to better understand why you would want to use a slow scripting language on the server when there are a number of better, compiled languages available. I do not doubt you have a good reason, but in my experience, those using JavaScript on the server usually do so because they are front-end developers tasked with writing something for the backend, and JavaScript is what they are proficient in. I realize that may not be the only reason, but I would find it interesting to better understand why you are trying to increase your proficiency at writing server code in JavaScript.
Thanks
|
|
|
|
|
I already know C#. I already know C++. I already know C. I have no intention of learning Java.
I learned TypeScript for the same reason I'll probably end up learning Python even though I hate it.
Because it's used everywhere.
Node.js IS the backend these days. For at least half the major paying projects I've seen.
Like it or not, it's what's for dinner, and the less I know about that stuff, the further behind I get from where the rest of the world is.
Even if TypeScript got retired right now, it's relevant. Extremely relevant, because people are producing code in it.
I don't have to like it to want to understand what the hell is going on with the state of the world in development these days.
I intend to age out gracefully when I do, not get pushed out because I don't understand the way programming is done anymore.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Quote: Node.js IS the backend these days. For at least half the major paying projects I've seen.
I have yet to see node.js used to any significant degree on the projects I have worked on, but my projects tend to be larger ones with a web front end and an API-based backend, or service-only API-based backends, some on-premises and lately mostly Azure-hosted or Azure-native. Some of the older projects being updated have a little node.js in them, but it goes away in the updated version. Of course, since I loathe JavaScript, I tend to gravitate to projects not requiring it, which makes my experience more subjective than objective.
Yet, statistically, node.js ranks high in server-app language utilization, although the lists I see do not differentiate by overall app size and complexity.
Now that .NET 6.0+ (the current version is .NET 8) is out, stable, full-featured, and can run compiled on multiple OSs, it is a safe candidate for server and cloud apps, which was not true just a year or so ago.
|
|
|
|
|
I already know .NET
I don't need to expend effort learning it. I was on the Visual Studio development team at Microsoft back when they rolled out C#. I've used it ever since.
I'm covering my bases, making sure I have a broad understanding of relevant technologies used in software these days.
However you feel about it, node.js is part of that milieu today. Your opinions or mine about how solid it is are not relevant to the fact that it is being used.
That can be a bitter pill to swallow, but remaining relevant and not holy-rolling yourself into a corner sort of requires it.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Quote: I'm covering my bases, making sure I have a broad understanding of relevant technologies used in software these days
That is as good a reason as any and makes good sense. I haven't had to learn JavaScript (to the depth of using it when needed, like you do). So far, I have found plenty of work where C# (server app development, cloud development, web development with Blazor) is required, combined with experience in Azure-native development. Converting JavaScript to C# (coding and architecture) is about as far as I have gone.
If getting and keeping a job required me to learn JavaScript and use node.js, then I would do so. I have been fortunate so far. I know what the relevant technologies are, but I have been fortunate to be able to pick and choose which ones I work in. I applaud your flexibility with the projects you choose.
|
|
|
|
|
I probably won't end up taking a job doing web development as a primary thing, but I could see being drafted to develop a companion app to some embedded widget using Flutter or something.
This not only keeps me in the loop, but it keeps me from getting rusty in general.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|