|
Speech to text programs are pretty good these days, but code is not English. You would need a special language module for each computer language.
For example, how would you enter a variable 'SumOfSquares'? Should it be one word or three? Is 'x equals 5' 'x = 5', or 'x == 5'? Other examples are easy to find.
While I can see the utility of such a program for people who have lost the use of their arms/fingers, I have my doubts whether there are enough programmers in that state to make development commercially viable.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
I saw the first doctor speech-to-text program many years ago. The system recognized medical terms only, not general chitchat. So it was quite reliable, within its domain.
Code also has a limited vocabulary and a strict grammar. Assume that the program knows the syntax, and maintains a parse tree and a current position within it. If a spoken word has two or more interpretations, chances are that some of the alternatives will give a parse error, so they are unlikely to be correct. In most cases, there will be only one parseable interpretation.
Your examples:
If there is a declared variable or method named 'SumOfSquares', and it is syntactically legal at the current position, then it is one word. If you are in the middle of a literal string constant, it is more likely to be three words (with no camel casing).
If you have just opened an 'if' or 'while' condition, then it goes in as 'x == 5'. If you have just completed the previous statement, and an assignment to x is a legal next statement, then it goes in as 'x = 5'.
I am sure you could find examples where two entirely different interpretations of the speech would both be syntactically legal. But for the vast majority of code, that is not the case.
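As a minimal sketch of the idea (in Python, using its own parser via the ast module; the helper name is mine, not taken from any real dictation product): try each candidate interpretation in the current syntactic context and keep only those that parse.

```python
# Sketch: disambiguate a spoken phrase by testing which candidate
# transcriptions are syntactically legal in the current context.

import ast

def parses(source: str) -> bool:
    """Return True if the text is syntactically legal Python."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

# Spoken: "if x equals five" -- only one interpretation parses here:
print(parses("if x == 5:\n    pass"))  # True
print(parses("if x = 5:\n    pass"))   # False: assignment is illegal in a condition
```

In a condition, only the comparison survives; after a completed statement, the assignment would be the legal reading instead.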
Side remark:
I have a hobby of giving hell to speech synthesis - text to speech. Even though it turns the problem upside down, there is a lot of common handling. I collect all sorts of words that are written identically but have different meanings and pronunciations. (The first time I read "Lead guitar: ..." on a vinyl cover, I thought it was a joke about the bass guitar. Heavy!) I have gathered a handful of sentences that have two very different meanings, both grammatically correct. For 99% of the words, if you analyze the sentence syntactically and semantically, only one interpretation and pronunciation gives a meaning. (But most speech generators do not do sufficiently deep analysis to get it right.)
Unfortunately for this forum, my 'homograph' collection is in Norwegian, so the examples I could present would make no sense to most of you.
|
|
|
|
|
trønderen wrote: Code also has a limited vocabulary and a strict grammar. Assume that the program knows the syntax, and maintains a parse tree and a current position within it. If a spoken word has two or more interpretations, chances are that some of the alternatives will give a parse error, so they are unlikely to be correct. In most cases, there will be only one parseable interpretation.
Which is what I said - one would need to build an appropriate parser for the language. I never said it was impossible.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Heh, I wire wrapped a board for my computer (S100 bus) back in the 80s to interface with the Votrax speech synthesis chip. Designing and wire wrapping the board was the easy part! Writing a simple program to make the computer 'talk' wasn't too hard; it was words like 'read' and 'lead' that caused problems. I didn't really have the chops to programmatically determine the sentence context, so I ended up having a list of words with special code that attempted to determine the correct pronunciation. Eventually, the program got too big for the amount of memory I had at that time (16K). It was definitely a fun home project.
|
|
|
|
|
You are the first person I have talked to that has used (and even built a board for) an actual S100 machine! I guess that 3 out of 4 CP members do not know what it is!
BYTE magazine had a number of articles in those days on DIY speech synthesis and, what the original post was about, speech recognition. There were several articles about a speech recognition board that could be trained to understand 64 words. As far as I remember from what the authors told, it would be reasonably reliable only with the voice of the person who had trained it, and the 64 words should be as acoustically different as possible. Alexa is somewhat more sophisticated.
When I read about people who worked with S100 machines, I'm itching to go down to my basement to pick up those BYTE magazines from the late 1970s and early 1980s and let my mind wander back to the days when you could understand every single bit in a computer. About 15 years ago, I went into embedded programming on 8051 chips; that was sort of a return to the old days. When we picked up the ARM M0 (with our own monitor), I still had the feeling of being in control, but when we progressed to the M4 and an external OS (Zephyr), and further on to the M33, again something was slipping out of my hands...
|
|
|
|
|
I built a fair number of boards for my S100 bus system. Besides the Votrax board, I built a 4K RAM board, a dual port serial board, a cassette tape storage interface board and a Selectric mechanism control board (no dot matrix for me!!). My whole career was embedded programming (retired in 2019). I just loved it. I really loved the control.
|
|
|
|
|
I'd generate pseudo-code: set x to 5.
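That approach could be sketched as a small set of spoken-phrase templates mapped onto code. The phrase patterns below are invented for illustration, not from any real dictation tool.

```python
# Sketch: map a few spoken pseudo-code templates onto code text.

import re

RULES = [
    (re.compile(r"set (\w+) to (\w+)"), r"\1 = \2"),
    (re.compile(r"if (\w+) equals (\w+)"), r"if \1 == \2:"),
]

def to_code(spoken: str) -> str:
    """Return the code for the first matching spoken template."""
    for pattern, template in RULES:
        m = pattern.fullmatch(spoken)
        if m:
            return m.expand(template)
    return spoken  # unrecognized phrases fall through unchanged

print(to_code("set x to 5"))     # x = 5
print(to_code("if x equals 5"))  # if x == 5:
```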
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
I didn't say it's impossible. I said that code cannot be treated as a dialect of English.
I think that the bigger obstacle to developing speech-to-text for coding is economic. I doubt that there are enough coders who need or would want such a system.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
At my first job, they actually had a blind "intern" who programmed in Braille on his special typewriter. I don't remember how we got his program onto "cards", but I was asked to review his code. I can't help but think that some "Braille to speech" would have helped his comprehension. (My issue is "slow" talkers.) "Too much work" depends on the recipient.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Daniel Pfeffer wrote: I doubt that there are enough coders who need or would want such a system.
You are really underestimating what people get up to. GitHub has 370+ million repositories, 28 million of them public. They don't create them based on need but rather want.
If you google for the following you will find at least some solutions.
speech to code
|
|
|
|
|
jschell wrote: They don't create them based on need but rather want.
True. But commercial (as opposed to freeware/shareware) packages require maintenance, support, etc., which IMO would be uneconomical for such a niche product.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Aside: here is a Speech Recognition joke of the previous millennium -
A smart programmer went to a college classroom and proudly claimed that "My speech recognition software is so advanced that it can run voice commands on my DOS machine; you are free to test it now", and ran it. Immediately, a smarter student from the last bench shouted - "FORMAT C COLON ENTER".
It is left to your imagination about what happened next.
modified 19-Nov-23 20:27pm.
|
|
|
|
|
Somewhat off topic, but perhaps amusing, though you may have heard this story previously. Re: early language-translation technology, English to Russian and back again: "The Spirit is willing but the flesh is weak." -> Russian -> English -> "The vodka is strong but the meat is rancid."
|
|
|
|
|
"Out of sight, out of mind" -> Russian -> English "Invisible insanity".
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Yes. A professor in the School of Automation at the Indian Institute of Science, Bengaluru, by the name of Prof. M R Chidambara, told this sometime in the 1980s. I heard it directly from him.
|
|
|
|
|
There used to be "services" on the net where you could set up a list of languages, such as English -> Russian -> Greek -> German -> English, and give it a text that would be passed through the specified series of translations. At least one of the services could even iterate the sequence until the result was stable (or in some cases, oscillated between two alternatives).
I saved printouts of a few such iterations in my scrapbook, but didn't save the URL. It most likely would be dead today anyway.
Does anyone know of any such service in existence today?
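The iterate-until-stable behavior is easy to sketch. The translate() function below is a stand-in stub (with a single canned degradation, echoing the example upthread); a real implementation would call an actual translation API.

```python
# Sketch: pass text through a chain of translations repeatedly until
# the result stops changing (a fixed point).

def translate(text: str, src: str, dst: str) -> str:
    # Stub standing in for a real translation service: degrades the
    # text once, then leaves it alone.
    table = {"out of sight, out of mind": "invisible insanity"}
    return table.get(text.lower(), text)

def round_trip(text: str, chain: list, max_rounds: int = 10) -> str:
    """Iterate a translation chain (e.g. en -> ru -> en) to a fixed point."""
    langs = chain + [chain[0]]  # close the loop back to the source language
    for _ in range(max_rounds):
        result = text
        for src, dst in zip(langs, langs[1:]):
            result = translate(result, src, dst)
        if result == text:  # stable: no further change
            return result
        text = result
    return text

print(round_trip("Out of sight, out of mind", ["en", "ru"]))
# invisible insanity
```

Detecting oscillation between two alternatives, as the post describes, would just mean remembering the previous couple of results as well.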
|
|
|
|
|
For quite a few years (it has been fixed now), Google Translate claimed that the Norwegian 'postoppkrav' (charge on delivery) translated to Swedish 'TORSK' (codfish).
Regardless of source and target language, everything was first translated to English, and then from English to the target language. So the Norwegian word for charge on delivery became COD, and COD in Swedish, maintaining the capitalization, is TORSK. Both steps make perfect sense.
Google could also translate English numbers to French: forty - quarante, fortyone - quarante-et-un, fortytwo - 42, fortythree - 43 ... It was a mystery to me why it stopped at 41, and not at some "round" number. Maybe it was because "42" has an iconic value.
But we are sidetracking from the subject "Speech to text".
|
|
|
|
|
The next thing was another person jumping up, yelling "Yes" to answer the question "Are you sure?"
This was regularly claimed to be a "true" story from Microsoft's first demonstration of their speech recognition. Lots of people did believe that the story was true. In Norwegian, we have a saying that goes: "Well, if it ain't true, it sure is a good lie!"
|
|
|
|
|
Amarnath S wrote: A smart programmer went to a college classroom and proudly claimed that "My speech recognition software is so advanced that it can run voice commands on my DOS machine; you are free to test it now", and ran it. Immediately, a smarter student from the last bench shouted - "FORMAT C COLON ENTER". That says nothing of the quality of the medium through which someone delivered that command. It could've been done with a keyboard just as easily, rendering the intended point moot. I thought this dude was supposed to be smart in the example?
Jeremy Falcon
|
|
|
|
|
Obligatory xkcd : xkcd: Listening
"A little song, a little dance, a little seltzer down your pants"
Chuckles the clown
|
|
|
|
|
xkcd: Listening[^]
I have done something similar to this at one place where I knew they had an Alexa. It didn't work (I suppose I used the wrong formulation, or Amazon changed the way to do it), but the owner got seriously frightened and almost banned me from the house. The other guests were ROFLing for half an hour.
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
If you're determined not to use your hands, you can always have macros/code snippets to handle the parts that a program wouldn't get correct, and have voice dictation run those. Then you can use the normal functionality for the parts that it will. Technology is a long way away from making this a worthwhile pursuit, though. You'd be better off having ChatGPT code your crap and using speech to text to give it prompts.
Jeremy Falcon
modified 20-Nov-23 10:56am.
|
|
|
|
|
|
Given enough time, anything is possible. Your code or someone else's?
I use a dictionary to validate every word in my text-to-speech program.
I use "markup" to indicate words that need to be spoken via phonetics.
RecognizedWordUnit.Pronunciation Property (System.Speech.Recognition) | Microsoft Learn
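The markup idea can be illustrated with a toy substitution pass. The [word|phonetic] syntax and the phonetic spelling below are invented for this sketch, not Gerry's actual format.

```python
# Sketch: replace marked-up words with an explicit phonetic spelling
# before handing the text to a synthesizer.

import re

def apply_phonetic_markup(text: str) -> str:
    """Replace [word|phonetic] markers with their phonetic spelling."""
    return re.sub(r"\[(\w+)\|([^\]]+)\]", r"\2", text)

print(apply_phonetic_markup("The [lead|led] pipe is heavy."))
# The led pipe is heavy.
```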
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Gerry Schmitz wrote: I use a dictionary to validate every word in my text-to-speech program. At least that can give a recognition quality comparable to word-by-word translation from one language to another, with no concern for context or grammar.
(I suspect that you intended to write "... in my speech-to-text program". If you really meant text-to-speech, that is a different, although related, problem. Are you then referring to a pronunciation dictionary? How do you handle homographs?)
|
|
|
|
|