|
OK, this place has a lot of good articles to read, and there was one I wanted to comment on, so I signed in, which triggered an e-mail to me with a link that logs me in. So I go back to the original article I had wanted to comment on, but that page shows me as not logged in! Even if I copy & paste the URL from the original tab into the tab that the e-mail link opened, it still shows me as not logged in, and when I log in from there, to paraphrase Lou Costello, we're back to getting an e-mail link (i.e., 1st base).
|
What did you select as your cookie settings? If it's "everything off" then it can't store your login status, and that may have something to do with it.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
OriginalGriff wrote: What did you select as your cookie settings? If it's "everything off" then it can't store your login status and that may be something to do with it.
The logins for all other webpages persist fine.
|
No, I mean the site-specific pop-up that lets you disable ad cookies, for example. If you said "disable all", some sites take that as "no cookies at all" and forget logins as a result.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
re Medium: [^]
As a writer, I'd never use Medium because:
Quote: Unless otherwise agreed in writing, by submitting, posting, or displaying content on or through the Services, you grant Medium a nonexclusive, royalty-free, worldwide, fully paid, and sublicensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your content and any name, username or likeness provided in connection with your content in all media formats and distribution methods now known or later developed on the Services. Medium needs this license because you own your content and Medium therefore can’t display it across its various surfaces (i.e., mobile, web) without your permission.
For a critique of using Medium to post your own content: [^]
As a reader, I'm too cheap to pay for its premium content or full access. But I do find certain free articles I really enjoy, like: [^]
As a non-subscriber, you should have access to a couple of non-premium articles a month.
I don't think the not-logged-in problem is cookie-related. I'm not a member, and I have Privacy Badger running on Chrome actively blocking two Medium trackers, but I have no problem logging in using the link they send me.
I assume you've reloaded the page with Ctrl+F5, and that you are not trying to access premium content.
«One day it will have to be officially admitted that what we have christened reality is an even greater illusion than the world of dreams.» Salvador Dali
|
Bug in my code: "Oh yeah, small oversight on my part, happens to the best of us!"
Bug in someone else's code: "****ing piece of code! Dumb ****ing programmers are totally clueless and I hope they ****ing die in a fire!!!"
|
Don't hold back. Tell us how you really feel!
"It's better out than in." - Mrs. Cosmopolite
|
I'm trying to do in-place decompression of zip files such that you don't need to actually extract the streams in order to get the decompressed contents.
Basically, you can open a zip and read it and it will decompress on demand while you do block reads off of a special implementation of a stream it gives you.
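To make that concrete, here's the shape of the interface I'm after, sketched in C# with the BCL's built-in classes (the file and entry names are made up). The real thing has to be rebuilt without a library like this, so treat it as the target behavior, not the implementation:

```csharp
using System;
using System.IO;
using System.IO.Compression;

class PullReadDemo
{
    static void Main()
    {
        // "book.epub" and the entry path are hypothetical.
        using var zip = ZipFile.OpenRead("book.epub");
        var entry = zip.GetEntry("OEBPS/chapter1.xhtml");

        // entry.Open() hands back a stream that inflates on demand:
        // each Read() decompresses just enough to satisfy the request,
        // so the whole entry never has to sit in memory at once.
        using Stream pull = entry.Open();
        var block = new byte[4096];
        int n;
        while ((n = pull.Read(block, 0, block.Length)) > 0)
        {
            // consume n decompressed bytes here
        }
    }
}
```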
I'm not great at math, or at implementing compression and cryptography algorithms even if I vaguely understand the concepts, so it was lucky for me that I found some of the relevant code in the public domain.
However, that code uses callbacks: you give it a function it calls to flush data as it writes.
I can't use callbacks because I'm presenting a stream interface: you need to be able to request a block to be decompressed, at which point one fragment of the decompression takes place.
So basically, I need to turn this into a coroutine.
It's a bit like turning a SAX style xml reader into an XmlReader style pull parser.
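Roughly, the inversion looks like this (a toy C# sketch; Produce and Pull are stand-ins, not the actual inflate code):

```csharp
using System;
using System.Collections.Generic;

static class PushVsPull
{
    // Callback style, like the public domain code: the producer drives,
    // pushing output through a flush function you hand it.
    static void Produce(Action<byte[]> flush)
    {
        flush(new byte[] { 1, 2 });
        flush(new byte[] { 3, 4 });
    }

    // Coroutine style: an iterator method suspends at each yield, so the
    // consumer drives. Each MoveNext() runs the body to the next yield,
    // which is exactly what a Stream.Read() implementation can sit on top of.
    static IEnumerable<byte[]> Pull()
    {
        yield return new byte[] { 1, 2 };
        yield return new byte[] { 3, 4 };
    }
}
```

The catch is that you can only rewrite the producer as a coroutine if you can restructure its source, which is why the reverse engineering is the hard part.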
It's mind bending. One big issue is that I don't know how big the minimum buffer for a Huffman block is, and even once I do, I have to sift through some incomprehensible code.
What I thought would take me part of a day may take me a few, but it's an adventure.
I keep telling myself that in the end this will be worth all the effort, because it means I can create better EPUB readers on cheaper hardware, and possibly even browse the web from an ESP32 or, soon, an ARM Cortex-M or other IoT gadget. (The code should work; I'm just not clear on the memory requirements of my own code yet, but it's light.)
Why zips? Zips are just part of the mess because EPUBs are renamed zip files that contain their HTML and image content. And I'll probably be using them as packages to deploy HTML-based UIs as well.
The only reason any of this works is that everything is streamed on demand and progressively loaded: from the zips, to the images and HTML contained therein, to the TrueType fonts used to render the text. It's all demand-streamed, so nothing ever needs to be loaded all at once, keeping the memory requirements tiny.
It's gonna be so darned cool. Imagine a (bare-bones, think Lynx-ish but with graphics) web browser on a $5-$10 SoC.
Real programmers use butterflies
|
I don't know if it helps, but:
We use zip compression in the file format for a logging application. Our approach to this problem is to compress incoming data until the size of the currently compressed content reaches or exceeds a given size (in our case, 64K). At that point we write the compressed size to the file followed by the compressed data.
When reading we do the reverse: read the size, use the size to read the compressed data, decompress. We don't need fixed-size buffers on reading, but you could alter our approach: while you're compressing incoming blocks, accumulate both the compressed and the uncompressed totals, and when the uncompressed total reaches your decompression buffer size, output the compressed size and the compressed block. That would guarantee a limit to the size of the buffer you would have to allow for on incoming data.
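A sketch of that framing in C# (the names WriteBlocks/ReadBlock are made up, DeflateStream stands in for whatever codec is actually used, and the cap is applied to the uncompressed side per the variation above):

```csharp
using System;
using System.IO;
using System.IO.Compression;

static class BlockFraming
{
    // Compress `data` into length-prefixed blocks, capping each block
    // at `rawBlockSize` uncompressed bytes.
    public static void WriteBlocks(Stream output, Stream data, int rawBlockSize = 64 * 1024)
    {
        var raw = new byte[rawBlockSize];
        int n;
        while ((n = data.Read(raw, 0, raw.Length)) > 0)
        {
            using var packed = new MemoryStream();
            using (var def = new DeflateStream(packed, CompressionLevel.Optimal, leaveOpen: true))
                def.Write(raw, 0, n);

            byte[] body = packed.ToArray();
            output.Write(BitConverter.GetBytes(body.Length), 0, 4); // 4-byte size prefix
            output.Write(body, 0, body.Length);                     // compressed block
        }
    }

    // Read one block back: size prefix, compressed body, inflate.
    // Returns null at end of stream. (A robust version would loop on
    // short reads rather than assume Read fills the buffer.)
    public static byte[] ReadBlock(Stream input)
    {
        var sizeBytes = new byte[4];
        if (input.Read(sizeBytes, 0, 4) < 4) return null;

        var body = new byte[BitConverter.ToInt32(sizeBytes, 0)];
        input.Read(body, 0, body.Length);

        using var inflater = new DeflateStream(new MemoryStream(body), CompressionMode.Decompress);
        using var unpacked = new MemoryStream();
        inflater.CopyTo(unpacked);
        return unpacked.ToArray();
    }
}
```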
This approach of course requires that you control both the compression and decompression handling of the data. If you can only control one, then this wouldn't apply.
Software Zen: delete this;
|
That's essentially what I'm going to do, but in order to do that I either need to finish reverse engineering this code or write my own. I'd prefer to reverse engineer this stuff though since it's also used for PNG decompression elsewhere and I'd like to share bits to keep the code footprint down.
Real programmers use butterflies
|
Xiaomi overtakes Apple as number two smartphone vendor for first time - The Verge[^]
tl;dr?
Mobile phone sales worldwide:
1st (19%) Samsung
2nd (17%) Xiaomi
3rd (15%) Apple
It was Huawei above Apple until the US sanctions stopped them using Android ...
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
Where's Motorola? Oh, it's Lenovo now? Where's Lenovo?
|
I'm sure this is purely market driven and not politically driven...
cheers
Chris Maunder
|
I wonder if a better metric might be net profit per phone. For Apple's relatively closed ecosystem, a longer-range view of profit might include sales of follow-ons that are only available from Apple, possible brainwashed brand loyalty to its OS/UI, and the exclusive content only on its online store.
Even if a big pile of cash fell on me, I'd hesitate to buy a sexy new i-thing ... too lazy to learn the swarm of little weirdnesses necessary to use its OS skillfully ...
«One day it will have to be officially admitted that what we have christened reality is an even greater illusion than the world of dreams.» Salvador Dali
|
Apple have peaked ..... now we need mangoes..
Caveat Emptor.
"Progress doesn't come from early risers – progress is made by lazy men looking for easier ways to do things." Lazarus Long
|
When my mother was in hospital, we found out that she'd never tried a mango, and wanted to know what it tasted like.
Have you ever tried to describe that, without saying "it tastes like mango"?
(I bought one, and served it to her the following day)
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
Just came across GitHub - aminya/typescript-optimization: Compares different for-loops in TypeScript/JavaScript[^] and thought it was interesting.
I still, after all these years, don't get why something like foreach or for...of should be any slower than for (...). I understand the generalisations and checks that have to happen, but in a typesafe language, surely, once the compiler's worked things out it can all just boil down to the fastest option at runtime regardless of syntax?
cheers
Chris Maunder
|
Not really. (At least in C#.) Polymorphism means the actual type isn't known until runtime, so it can't decide which technique is best.
I say use for or while whenever you can and only use foreach as a last resort, when the others don't work. foreach has serious limitations, as do its proponents.
It's often a case of the developer having more information about what's "best" than the compiler does. Do the compiler and optimizer a favor and pre-optimize.
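To illustrate (a C# sketch; the lowering described in the comments is what the compiler does for arrays versus interface-typed sources):

```csharp
using System;
using System.Collections.Generic;

class ForeachLowering
{
    static void Main()
    {
        int[] nums = { 1, 2, 3 };

        // Static type is an array: the compiler lowers this foreach to a
        // plain index loop - no enumerator object, no interface calls.
        foreach (int n in nums) Console.Write(n);

        // Same data, but the static type is now IEnumerable<int>: this
        // foreach lowers to GetEnumerator()/MoveNext()/Current through
        // the interface, which the JIT may not be able to devirtualize.
        IEnumerable<int> seq = nums;
        foreach (int n in seq) Console.Write(n);
    }
}
```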
|
I think about what happens with the LINQ Count() extension: if, at runtime, it's known the collection has a Count property then it'll just use that; otherwise it will iterate and actually count the items.
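Something like this, presumably (a simplified sketch of the idea, not the actual BCL source):

```csharp
using System.Collections.Generic;

static class CountSketch
{
    static int CountOf<T>(IEnumerable<T> source)
    {
        // Cheap runtime type test first: collections expose an O(1) Count.
        if (source is ICollection<T> collection)
            return collection.Count;

        // Otherwise fall back to walking the whole sequence.
        int n = 0;
        foreach (var _ in source) n++;
        return n;
    }
}
```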
That's the sort of thing I keep thinking should be happening with for loops.
But I'm fairly sure very, very smart people have spent way too much time thinking about this, so clearly there's some difficult issue stopping them.
cheers
Chris Maunder
|
But I don't see how it can do that at compile time; it sounds like it has to use reflection to determine whether or not the provided class has a Count property.
That is, (in C#) if the compiler only knows that the class will be IEnumerable or IEnumerable<T> , then it can't know at compile time what will be provided.
Sure, if it knows that a List or Array will be provided, then it should be able to. But that means you know it is a List or Array , so use a better loop.
And don't use Linq (ptui).
|
PIEBALDconsult wrote: But I don't see how it can do that at compile time
There will be some cases when it can at compile time:
let arr: number[] = [1, 2, 3];
for (let x of arr) {
...
}
For this it's pretty clear-cut. For other scenarios: some runtime pre-checks, then choose the fastest implementation that works.
Anyway, I'm way out of my depth on this stuff, but it does seem odd that we're still getting "never use foreach" directives.
cheers
Chris Maunder
|
PIEBALDconsult wrote: I say use for or while whenever you can and only use foreach as a last resort, when the others don't work. foreach has serious limitations, as do its proponents.
Obligatory xkcd: Optimization[^]
My experience with foreach is from C# rather than TypeScript, but I would think the same reasoning applies. For me the choice of foreach vs. for (...) is based on whether I care about the state of the iteration within the loop. If I don't, then foreach is preferable. If I do, then either for (...) or while (...) wins. The construct being iterated over plays a role as well. If it's a simple array or an array-like class, then for (...) gets a stronger weight in the voting. If the only way to iterate is through IEnumerable or something similar, then I'll almost always use foreach. The goal is to choose the simplest, most natural iteration method based on the needs at the time.
As far as performance goes, I would think it doesn't matter in most cases. If you find a case where the enumeration technique dramatically affects performance, I'd be inclined to rethink the algorithm.
Software Zen: delete this;
|
I have studied some effects of it here:
On why to use DataViews[^]
In a few simple cases I do use foreach, but when performance is critical I do not; I am more likely to use GetEnumerator() and multi-threading to iterate the items.
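For instance, something along these lines (a sketch; ConsumeInParallel and the default thread count are made up for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

static class ParallelConsume
{
    // Several workers pull from one shared enumerator behind a lock,
    // so items are handed out on demand instead of via foreach.
    public static void ConsumeInParallel<T>(IEnumerable<T> items, Action<T> work, int threadCount = 4)
    {
        using IEnumerator<T> cursor = items.GetEnumerator();
        object gate = new object();

        var workers = new Thread[threadCount];
        for (int i = 0; i < threadCount; i++)
        {
            workers[i] = new Thread(() =>
            {
                while (true)
                {
                    T item;
                    lock (gate)
                    {
                        if (!cursor.MoveNext()) return;
                        item = cursor.Current;
                    }
                    work(item);
                }
            });
            workers[i].Start();
        }
        foreach (var w in workers) w.Join();
    }
}
```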
|
PIEBALDconsult wrote: use multi-threading to iterate
That is definitely one of the magic words for choosing something other than foreach.
Come to think of it, I wouldn't be surprised to see a multithreaded foreach at some point. They've added all manner of kitchen sinks to C# in recent years, what's one more?
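(In fact the Task Parallel Library has had one since .NET 4; a minimal sketch, with Process standing in for the per-item work:)

```csharp
using System.Threading.Tasks;

class ParallelForeachDemo
{
    static void Process(int item) { /* per-item work */ }

    static void Main()
    {
        int[] items = { 1, 2, 3, 4 };

        // Parallel.ForEach is effectively a multithreaded foreach: it
        // partitions the source and runs the body on worker threads.
        Parallel.ForEach(items, Process);
    }
}
```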
Software Zen: delete this;
|
Sometimes heavy enumeration is just par for the course. When you're computing parsing tables, for example, you have to do a lot of iteration over grammar constructs. It's unavoidable because it's baked into the math. If you found a way to do something like, say, subset construction without as much iteration, you could probably make a lot of money demonstrating that technique.
One reason I tend to avoid foreach in C# these days is that I port a lot of code to C++14 sans STL for use on IoT devices, so foreach isn't really available and is harder to translate.
I know that's a special case, but it comes up a lot for me.
Real programmers use butterflies
|