|
GenJerDan wrote: Why do people use it?
1. Twitter posts. Gives you more room to fit text into your posts that contain URLs.
2. Click tracking for marketing purposes. Most of those URL shorteners have a back office that lets you see how many clicks you're getting.
On the other hand, you have different fingers. - Steven Wright
|
|
|
|
|
|
I guess you could say it caught a buffer overrun issue?
|
|
|
|
|
Can't you even obey simple instructions?
You are supposed to tell a programmer, not 12,821,673 >32768 of them!
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
|
|
|
|
|
|
As a programmer, I'm very intrigued. So you're saying that 103956 is more than 32768? Very interesting. Never knew that. Thanks.
|
|
|
|
|
Smart K8 wrote: Very interesting. Never knew that. It's a trending way to handle error messages. When your system crashes you popup random facts so that at least the user is gaining knowledge while using your app.
There are two kinds of people in the world: those who can extrapolate from incomplete data.
There are only 10 types of people in the world, those who understand binary and those who don't.
|
|
|
|
|
I love this new trend. I'm amazed.
|
|
|
|
|
PIEBALDconsult wrote: Are they really using signed 16-bit addresses?
No. The index buffer defines the triangle faces of a 3D object. For some reason the vertex buffer (which is indexed by the index buffer) may contain no more than 32000 vertices. For most uses that may be enough; rendering many objects with 32000 vertices and a corresponding number of faces is a slow affair anyway. On the other hand, this decreases the size of the buffers, so you can load more 3D objects at the same time. Video memory has always been precious.
The language is JavaScript. that of Mordor, which I will not utter here
This is Javascript. If you put big wheels and a racing stripe on a golf cart, it's still a f***ing golf cart.
"I don't know, extraterrestrial?"
"You mean like from space?"
"No, from Canada."
If software development were a circus, we would all be the clowns.
modified 27-Mar-17 5:30am.
|
|
|
|
|
Vertex count is limited to 32 bits (per draw call / buffer). If you're using 16-bit index buffers then you can only reference up to vertex 65535, but you can still have up to 4294967295 indices in your buffer, though I've never actually tried.
If there's a 32768 limit on buffer sizes it's in their game code, it's nothing to do with the graphics card (except having enough video memory to store everything you need).
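The arithmetic behind these limits can be sketched as a toy calculation (not any real graphics API; the function name is mine). The 32768 in the error message is exactly 2^15, which is what you would get from a signed 16-bit index type, while an unsigned 16-bit index addresses 65536 vertices and 32-bit indices far more:

```python
# How many distinct vertices one index element can address, for
# common index-buffer element types.

def max_vertices(bits: int, signed: bool) -> int:
    """Number of distinct vertices addressable by one index element."""
    return 2 ** (bits - 1) if signed else 2 ** bits

print(max_vertices(16, signed=True))    # 32768  <- matches the error message
print(max_vertices(16, signed=False))   # 65536
print(max_vertices(32, signed=False))   # 4294967296
```
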
|
|
|
|
|
Anthony Mushrow wrote: If you're using 16-bit index buffers then you can only reference up to vertex 65535, but you can still have up to 4294967295 indices in your buffer, though I've never actually tried.
I know that, but the error message that was shown mentioned 32768 as max. index value, so we must assume that they used a 16 bit signed type in the index buffer. And in the end it's irrelevant how many vertices you have in the vertex buffer if you can't access them.
|
|
|
|
|
The message actually says that there are too many indices for the index buffer, not that a specific index is too high.
I've never seen anybody use a signed type in an index buffer since it's just a waste, I'm not even sure if you can. You could use a signed type in your own code, but it'll be interpreted as unsigned on the GPU.
|
|
|
|
|
Not to mention awkward. Did you see the video I posted here[^] two days ago? I must have gotten something right.
|
|
|
|
|
Did the steam engine just derail?
|
|
|
|
|
Pong?
Someone's therapist knows all about you!
|
|
|
|
|
The purpose of using indices is to save GPU memory. A vertex contains at the very least x, y, z coordinates. It can also carry additional info like an RGBA color or texture coordinates (u and v). If it just has (x, y, z), its size is 3 * sizeof(float). A vertex often appears more than once in nearby triangles. If possible, we want to represent such a repeated vertex with an index number instead of duplicating the same information. If the indices are 32-bit, we can end up using more memory than we save.
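The trade-off above is easy to see with a cube as a worked example (8 unique position-only vertices, 12 triangles; the byte counts assume 32-bit floats, nothing vendor-specific):

```python
# Toy memory comparison: indexed vs. non-indexed mesh storage.
# Position-only vertices: x, y, z as 32-bit floats = 12 bytes each.

VERTEX_BYTES = 3 * 4

def unindexed_bytes(triangles: int) -> int:
    # Every triangle stores its own 3 full vertices.
    return triangles * 3 * VERTEX_BYTES

def indexed_bytes(unique_vertices: int, triangles: int, index_bytes: int) -> int:
    # Each unique vertex stored once, plus 3 indices per triangle.
    return unique_vertices * VERTEX_BYTES + triangles * 3 * index_bytes

print(unindexed_bytes(12))        # 432 bytes for a cube, no indices
print(indexed_bytes(8, 12, 2))    # 168 bytes with 16-bit indices
print(indexed_bytes(8, 12, 4))    # 240 bytes with 32-bit indices
```

The 16-bit version wins clearly here; with 32-bit indices the gap shrinks, and for vertices with little sharing the indexed version can indeed come out larger.
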
|
|
|
|
|
|
That's some good engineering!
|
|
|
|
|
Let's reprogram that thing to evade the darts and let them hit only useless fields.
|
|
|
|
|
Actually, they did that as well. Hit or miss depending on the dart.
|
|
|
|
|
CDP1802 wrote: let them hit only useless fields.
Or better, always hit the wires and "spang!" off into the crowd...
|
|
|
|
|
Maybe NASA could adapt the technology and move Mars when a lander screws up.
Marc
Latest Article - Merkle Trees
Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny
Artificial intelligence is the only remedy for natural stupidity. - CDP1802
|
|
|
|
|
Magic doesn't exist by definition, but there are also real reasons why compilers do not rise to meet the naive expectations that are nevertheless often repeated as a kind of software engineering meme. Some people think memes are true, and maybe some wishful thinking plays a role (I get it, it would be great if the compiler were magic), and they will just think I'm talking sh*t when I give the short version, so here's a longer one.
There is way too much to cover, so for now I'll concentrate on just one thing: why compilers are not godly at code generation. They are pretty good nowadays, but their mythical status is undeserved.
A fairly fundamental problem (for both compilers and novice programmers, but programmers can learn) is that the cost model is wrong, so when it's tiling its internal representation with pieces of machine code, it's not modeling reality accurately enough.
As far as I know, every reasonable code generation technique, even the advanced ones, wants the cost to be a scalar.
But it isn't a scalar, not in an accurate model of reality anyway, not since the end of what I'll call "simple architectures" (circularly defined as those architectures where the cost of an instruction is the number of cycles it takes and no other considerations exist).
What does reality look like? Let's look at the code below. It doesn't really matter what it actually does; I'm just going to analyze the cost (for Haswell) to show a bit of what's involved.
.L3:
mov rcx, rdi
imul rcx, rax
imul rdx, rsi
add rcx, rdx
mul rsi
add rdx, rcx
shrd rax, rdx, 2
sar rdx, 2
add rsi, 1
mov rcx, rsi
adc rdi, 0
xor rcx, 10000000
or rcx, rdi
jnz .L3
This is a fairly interesting loop because it has a non-trivial loop-carried dependency, which I have drawn here (two iterations shown, arrows are in the direction of the dependency, data flow is from bottom to top "against" the arrows). Just adding the latencies on the critical path (imul3, add3, add4, shrd2) or (mul2, add4, shrd2) either way gives 8, but it actually costs 9 cycles per iteration: imul3 and mul2 cannot be executed in the same cycle (both need p1), so one of them has to wait a cycle, and either way that holds everything up by a cycle.
There is a bunch of other code in this loop, but it "fits in the gaps". In general, you do have to care about the other code, especially in typical throughput-limited loops.
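The latency-only part of that analysis is just a longest-path computation over the dependency DAG. A minimal sketch, using the node labels from the post and assumed Haswell-style latencies (imul 3, high half of a 64-bit mul 4, add 1, shrd 3) that make both quoted paths sum to 8:

```python
# Critical-path (longest chain) latency through a loop-carried
# dependency DAG. Node names follow the diagram labels in the post;
# the latencies are rough assumptions, not measured values.

LATENCY = {"imul3": 3, "mul2": 4, "add3": 1, "add4": 1, "shrd2": 3}
EDGES = {                 # successor lists: who consumes each result
    "imul3": ["add3"],
    "mul2":  ["add4"],
    "add3":  ["add4"],
    "add4":  ["shrd2"],
    "shrd2": [],
}

def critical_path(node: str) -> int:
    """Latency of the longest dependency chain starting at `node`."""
    return LATENCY[node] + max((critical_path(s) for s in EDGES[node]), default=0)

print(critical_path("imul3"))  # 8: imul3 -> add3 -> add4 -> shrd2
print(critical_path("mul2"))   # 8: mul2 -> add4 -> shrd2
```

Note what this model misses: it reports 8, while the real loop costs 9, because a pure latency model knows nothing about imul3 and mul2 contending for p1.
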
This is tricky enough by hand; now imagine implementing that in a compiler. What does the model even look like? Certainly not like a "just combine everything into one scalar" cost that you can simply add, that's not even close. A vector "pressure per port" seems like an obvious model for loops with only a trivial loop-carried dependency, but even that is really tricky: many instructions can dynamically go to a port with low pressure (e.g. p0156 means it can go to ports 0, 1, 5 or 6), and modeling that as 1/4 pressure to each port only works if there are no instructions that must go to a certain port, but there usually are. You could distribute those instructions across the ports like a CPU would, but only if you know the context, so now you have a cost that depends not just on the tile that you're looking at but also on all other tiles (which you may not even have chosen yet!).
Reality is a mess, and compilers just don't model it (though they could). That is not out of laziness: implementing a realistic model means you can't use the old DP tiling algorithm (which for DAGs isn't optimal anyway, but with some tweaks you can get close), because you don't have optimal substructure: the cost of a sub-tiling depends on the context in which it appears, and the best tiling may not consist of locally optimal sub-tilings.
To give a fairly abstract example of that, suppose you have parts A and B. Part A can be tiled either such that it has 2 µops going to port 1, or such that it has 1 µop going to port 0 and 3 to port 5. Which is better? It depends: if part B needs to send 2 µops to port 1, then combining it with the "locally better" first option gives port 1 a pressure of 4, which (if there is no other context and we're talking about throughput) is worse than combining it with the second option, where the worst port pressure would be 3. With more context, the decision can flip again.
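The abstract A/B example above works out like this in numbers (a toy model: each part is just a per-port µop count, and throughput is limited by the busiest port):

```python
# Per-port µop pressure of each tiling option for part A, combined
# with part B's fixed requirement. The busiest port bounds throughput.
from collections import Counter

A_OPTION_1 = Counter({"p1": 2})            # the "locally better" tiling
A_OPTION_2 = Counter({"p0": 1, "p5": 3})   # more µops, spread out
B = Counter({"p1": 2})                     # B must send 2 µops to port 1

def worst_pressure(*parts: Counter) -> int:
    """Pressure on the most loaded port after combining all parts."""
    total = Counter()
    for part in parts:
        total += part
    return max(total.values())

print(worst_pressure(A_OPTION_1, B))  # 4: port 1 becomes the bottleneck
print(worst_pressure(A_OPTION_2, B))  # 3: the "locally worse" option wins
```
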
Often heard: "Compilers know better what is fast and what isn't than you do."
Well you can fix that, start here.
Another big problem is that a lot is set in stone before code generation. Whether a certain optimization should be applied depends on how it actually works out during code generation, but compilers are too linear for that: they optimize their IR, then do code generation. If they make a choice that works out badly, too bad.
Ideally (from a quality perspective) any choice should have its consequences computed by running all possible versions all the way through code generation. Choosing based on anything else is essentially a guess, though there are "obvious cases". But it would be way too slow, since many of the decisions stack up to an exponential number of versions that would have to be tried. Not all decisions affect each other, of course, but it's bad enough.
It is really the opposite of the workflow of a human, if I may be so bold as to speak for an entire species: we're all about trying different approaches and seeing what works out.
The higher-level problems are even worse; maybe more on that some other time...
|
|
|
|
|
I'd agree - compilers are pretty good, but they still don't come close to an experienced human who knows what he is doing with machine code / assembler on a specific machine.
Part of that is that the language being compiled enforces a specific structure on the program being written, which may not be an ideal match for the task being coded. An example I had was where I needed to output 128-bit data serially with a clock bit: the compiler-generated code was slow as heck because it just didn't know what exactly I was trying to do, and there was no way to tell it. In assembler it was two machine instructions per bit and an order of magnitude faster (and the clock was symmetric as well, unlike the compiler version).
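The structure of that bit-banged output can be sketched as follows (a hypothetical illustration of the loop shape, not the poster's actual code; the returned (data, clock) pairs stand in for real port writes):

```python
# Shift a 128-bit value out MSB-first, toggling a clock line per bit.
# Two pin states per bit gives the symmetric clock mentioned above.

def shift_out_128(value: int) -> list[tuple[int, int]]:
    """Return the (data, clock) pin states emitted, one per half clock cycle."""
    states = []
    for i in reversed(range(128)):
        bit = (value >> i) & 1
        states.append((bit, 1))  # data valid, clock high
        states.append((bit, 0))  # clock low, completing a symmetric cycle
    return states

states = shift_out_128((1 << 127) | 1)
print(len(states))   # 256 half-cycles for 128 bits
print(states[0])     # (1, 1): the MSB goes out first
print(states[-2])    # (1, 1): ... and the LSB last
```
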
|
|
|
|
|
In the beginning there were only ones and zeroes, then came Assembly language.
Unfortunately, people did not understand what they saw.
An angry mob came forth carrying pitchforks and shouted:
"Aye, dark wizards!
Keep yer magic tomes of olde to yerself!
We dun want yer magic here, lest ye curse us all to heck!
Now let us call upon the Witchfinder General, that she may release us from evil!"
And thus came forward Grace Hopper, who wrote the first compiler, hiding the runes of the computer which people did not understand.
And people could use higher level languages and they did not look back.
Yet compilers have since been known to contain dark magic, a necessary evil, and those who dare open these Pandora's boxes are known as dark wizards.
|
|
|
|
|