|
Us also. We use Veracode for both SAST and SCA (which checks our third-party libraries). It seems to work fairly well.
|
|
|
|
|
I have seen some impressive findings from SAST. For example, it flagged sensitive information that could appear in a log via an exception thrown 14 call levels away. The log is very secure, so it was not a real issue, but it exhibited the kind of relentless analysis at which programs surpass humans.
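A toy sketch of the pattern that kind of finding catches (Python for illustration; the function names, the three-level call chain standing in for 14, and the `"hunter2"` credential are all invented): a sensitive value enters a call chain, surfaces in an exception message raised deep inside it, and leaks into a log at the top.

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def connect(secret):
    # The sensitive value is interpolated into the exception message here.
    raise RuntimeError(f"connect failed for credential {secret}")

def layer2(secret):
    connect(secret)

def layer1(secret):
    layer2(secret)

def startup():
    try:
        layer1("hunter2")                     # secret enters the call chain
    except RuntimeError as exc:
        log.error("startup failed: %s", exc)  # secret ends up in the log
```

Tracing that flow across every intermediate frame by hand is tedious; a taint-tracking SAST tool does it mechanically, which is exactly the "relentless analysis" described above.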
|
|
|
|
|
Thank you for sifting through the thousands of messages and replying to this.
|
|
|
|
|
Wordle 1,067 4/6
⬜⬜⬜⬜⬜
⬜⬜🟨⬜🟨
🟩🟨🟩⬜⬜
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 1,067 3/6*
⬜⬜🟨⬜⬜
⬜🟨🟩🟨⬜
🟩🟩🟩🟩🟩
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Wordle 1,067 4/6
⬜⬜⬜⬜⬜
🟨⬜🟨🟨🟨
⬜🟩🟩🟩🟩
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 1,067 4/6*
⬜⬜⬜⬜⬜
⬜⬜🟨🟨🟨
⬜🟩🟩🟩🟩
🟩🟩🟩🟩🟩
Happiness will never come to those who fail to appreciate what they already have. -Anon
And those who were seen dancing were thought to be insane by those who could not hear the music. -Friedrich Nietzsche
|
|
|
|
|
Wordle 1,067 5/6
⬜🟨🟨⬜⬜
⬜🟨⬜🟨🟨
🟨🟨⬜🟨⬜
⬜🟨⬜🟨⬜
🟩🟩🟩🟩🟩
Isn't this a proper noun?
|
|
|
|
|
|
Not considered an old person's game
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
|
|
|
|
|
Wordle 1,067 4/6
⬜⬜⬜🟨⬜
⬜🟩🟨⬜⬜
⬜🟩🟩🟩🟩
🟩🟩🟩🟩🟩
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
|
|
|
|
|
Wordle 1,067 5/6
⬛⬛🟨🟨⬛
⬛🟨🟩⬛⬛
⬛🟩🟩⬛🟩
⬛🟩🟩🟩🟩
🟩🟩🟩🟩🟩
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
Wordle 1,067 X/6*
⬛⬛🟨🟨⬛
⬛⬛🟨🟨🟨
⬛🟨🟨🟨⬛
⬛🟩🟩🟩🟩
⬛🟩🟩🟩🟩
⬛🟩🟩🟩🟩
|
|
|
|
|
And I think I found my most ambitious idea yet.
Training models so that LLMs spit out code for input specs, where the code looks hand written.
So like parser generators.
DAL generators
etc.
Different model for each. Each model comes in a nuget package along with a C# source generator that invokes it.
The only thing is it will require hosting your own LLM. I have two 4080s across two machines, so it's not a problem for me - part of why I bought them, but I wonder how practical it is in general.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
honey the codewitch wrote: I have two 4080s across two machines, so it's not a problem for me - part of why I bought them, but I wonder how practical it is in general.
While it might work, I suspect that at the current state of the art it would not be cost-effective. The costs of hardware, collection of training data, classification of the training data, etc. are likely to be more expensive than the time that you'd save on the coding.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
I mean that I intend to release nuget packages with pretrained models, integrated as C# source generators that prompt a local LLM: a (relatively) small model trained to undertake a specific type of coding task, like generating a parser from a context-free grammar.
I am not looking to make an all purpose code generator or anything like that.
My interest is in code synthesis by which I mean generating "hand written" code.
The differences between a generated parser and a hand-rolled parser run far deeper than cosmetics. The details of how they work are different, even if the principles are the same. Mainly, a generated parser with fixed lookahead will always match greedily, while a recursive descent parser like the one hand rolling would produce can switch between lazy and greedy matching, leading to more efficient and often much smaller code.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
nuget packages - pretrained models - LLM - coding task - generating a parser - context free grammar ...
Perfect candidates to extend the Word List in the Makebullshit - Tech Bullshit Generator[^]
This wonderful site is not updated yet with new AI buzzwords. Maybe it's time to do this.
|
|
|
|
|
This seems like the worst idea ever. Not only do you have no insight into the training of the model in the nuget package, but you also need to capture the generated source to see what's being compiled into your project. Throw a build pipeline and obfuscation on top and you have the perfectly opaque platform for distributing just about any kind of malware.
|
|
|
|
|
I don't see how that wouldn't be true of any code generator whose output someone, for some reason, obfuscated.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
The use of a pretrained model to create code generators is bad enough. It's not impossible to see what was created, but it's not directly easy, either. And how many people would bother to even try? For those who do care about what code generators are putting into their code and WHY, being able to read the generator's source and see the algorithm being injected is helpful, but here all you have is a collection of tensors that are impossible to reverse engineer. If stuff like this becomes common, we are doomed.
|
|
|
|
|
I'm wondering if you have enough patience in case waiting for the results of a training task lasts longer than one or two days.
|
|
|
|
|
I mean, stable-diffusion runs pretty quickly on my machine.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Define 'pretty quickly'.
|
|
|
|
|
Stable Diffusion takes minutes at most, even for the largest renders it can do in 16GB on my card. It's usually under a minute to render my prompts.
Edited: That's on my laptop's "4090" which is actually a 4080 die. But it is not as fast as my desktop's 4080. I haven't run SD on my desktop yet.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Sure,
but we should not compare the time a trained model needs to finish a given job with the time it takes to train the model (and then find/optimize the right parameters and run training again and again).
|
|
|
|