
Would you believe that I used conversion type functions?





I would believe anything, but what does that have to do with producing an efficient and valid encryption algorithm?





Nothing, in and of itself. What makes my encryption algorithm unique is that, unlike other encryption algorithms, the sets of all unencrypted values, encrypted values, and key values are electronically indistinguishable from one another, while each unencrypted value and its corresponding encrypted value are still uniquely different from one another.
modified 5Apr14 10:21am.





So you keep telling us, but that proves nothing. Your algorithm needs to be tested to destruction by experts in the field, before anyone is going to pay you any money for it.





That point I will concede. Nonetheless, I was planning on licensing it out, rather than selling the patent wholesale.





Daniel Mullarkey wrote: I was planning on licensing it out
Well, you still have the same problem. Who do you think is going to want to license an encryption system without any evidence of its efficacy?





It's fine to use int as a shortcode for 'group of 4 bytes' imo, that's what int means in pretty much every modern environment.





I understand that, I'm not sure that OP does.





I'd have to agree with the conclusions here and on Google[^]: the description is not enough to evaluate the value of your algorithm, and proprietary closed-source encryption algorithms are never a good idea. Since there are perfectly valid free and open-source algorithms available which have undergone years of intensive analysis, the chances of getting anyone to switch to your new, unproven algorithm are practically zero, even if you gave it away.
Daniel Mullarkey wrote: The algorithm uses carefully calculated mathematics to give the appearance of gibberish until it is decoded with the proper key or combination of keys.
As Arne Vajhøj said in your Google thread[^], that's practically the definition of encryption.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer





Most or perhaps all of your description is exactly what all encryption algorithms not only do but must do to be encryption algorithms in the first place.
Other than that, if you cannot patent your encryption algorithm then it is worth nothing, or at least nothing more than whatever snake oil your sales people can get for it.
http://www.networkworld.com/columnists/2001/0827schwartau.html[^]
If it can be patented, then do so, release it in a small market, and use the sales from that to patent/protect it elsewhere. Or just sell it to someone who already does this.





All it can get you is street cred; you won't make any money from it directly, but it could get you your foot in the door at some large corporation.
You'll never get very far if all you do is follow instructions.





A closed source encryption algorithm is worth pretty much nothing, because there's no way to evaluate how strong it is. So there's no way we can answer this question without seeing the code, which obviously you can't share because it would invalidate a patent application if it were in the public domain.
An encryption algorithm with no linkage between blocks is relatively weak, because an attacker can take blocks in isolation and recover parts of the key, particularly if he knows what some of the content is. If your key is long enough then it becomes a one-time pad, which is unbreakable, but you've just deferred the problem to how to exchange keys securely.
If your algorithm is weaker than DES then I doubt it's worth anything. However, having your name on a patent could give you some industry kudos and make it easier to get a good job or consultancy work.





In short, it is designed to compete with PGP. Even the keys themselves can be encrypted with my algorithm, by using the same key block or another key, thereby making it more difficult to intercept a key.





I wonder what optimization algorithm should be used for such a problem:
x: independent variable
y: dependent variable
For y = f(x), find all x such that:
sum of y is > p where p is a positive integer
sum of y is the minimum of all sums that are greater than p





There is important information missing:
1. what types are x and y?
2. what are valid ranges for x and y?
3. what do you mean by "sum of y"? For any given x, there is only one y; what is it that you sum up?
4. what are the properties of f()? E.g., is it monotonic, continuous, continuously differentiable? Can you even give an exact definition? Is it even a function? You do not state as much!





Myself I suspect that the most important piece of information missing is which school class this is for.





@jschelll - sorry to disappoint. It's been many years since I left college, which is probably why I'm finding it hard to write a generalized statement for the problem I'm facing!





Let me clarify my question:
x: is a discrete random variable
y: is a discrete dependent variable that follows a count-data model
y is a function of x.
Both x and y are finite positive integers.
Question is:
For a given p: find all x (i.e., x1, x2, ... , xi) such that
y1 + y2 + ... + yi > p
and
(y1 + y2 + ... + yi) is the smallest sum of all combinations of y's > p





This looks a lot like a variation on the knapsack problem[^].
That could suggest a strategy.
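As a rough sketch of that strategy (assuming, as in the clarified question, that the y's are positive integers): a subset-sum style dynamic program can find the smallest achievable sum that is strictly greater than p. The function name and interface here are my own invention for illustration.

```python
# Sketch: find the subset of positive-integer y's whose sum exceeds p
# by the least amount -- a variant of subset-sum / 0-1 knapsack.
def min_sum_over(ys, p):
    max_s = sum(ys)
    if max_s <= p:
        return None  # even taking everything cannot exceed p
    # reachable[s] is True if some subset of ys sums to exactly s
    reachable = [False] * (max_s + 1)
    reachable[0] = True
    for y in ys:
        # iterate downwards so each y is used at most once
        for s in range(max_s, y - 1, -1):
            if reachable[s - y]:
                reachable[s] = True
    # smallest reachable sum strictly greater than p
    return next(s for s in range(p + 1, max_s + 1) if reachable[s])
```

This is pseudo-polynomial (O(n * sum(ys)) time), which is fine for small integer y's; for large values you are back in NP-hard knapsack territory.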





Let me clarify my response: your question doesn't make sense, neither in the original posting nor in this 'clarification': you are seeking the combinatorial optimum solution in an infinite search space, and that is impossible to solve. You must restrict that search space to a finite set of (x, y) pairs before the question even starts to make sense!





I have been interested in Decision Tables [^] for a long time, and lately I found myself wanting to do some exploratory programming to create a Decision Table UI, parse it, and crank out a bunch of complex logical assertions in a form that would be useful for creating a set of "business rules," state machines, language parsers, etc.
I suspect some of you "old-timers" here have experience with decision table software and theory.
The most interesting aspect of decision-table theory, to me, is the analysis of a set of complex logical statements to see if there are ambiguities, contradictions, or "incompleteness."
I don't quite know how to conceptualize the dynamics of the process of logical verification; I'm posting this message to just ask for some pointers to any resources you are aware of for theory or algorithms useful for this.
I'm not looking to find code at this point. I have been searching on the web, and have been examining the various commercial decision-table software packages for Windows (pricey!). Nothing yet has really given me any ideas on the theory/algorithms for logical "provability."
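For what it's worth, when the conditions are all boolean the brute-force version of that analysis is straightforward: enumerate every combination of condition values and see which rules fire. A minimal sketch (the rule representation here is my own assumption, with None standing for a "don't care" entry):

```python
from itertools import product

def check_table(rules, n_conditions):
    """Check a boolean decision table for incompleteness, contradictions,
    and ambiguity. rules: list of (conditions, action) pairs, where
    conditions is a tuple of True / False / None (None = don't care)."""
    issues = []
    for combo in product([True, False], repeat=n_conditions):
        # collect the actions of every rule that matches this combination
        matches = [action for cond, action in rules
                   if all(c is None or c == v for c, v in zip(cond, combo))]
        if not matches:
            issues.append(("incomplete", combo))          # no rule covers it
        elif len(set(matches)) > 1:
            issues.append(("contradiction", combo, matches))  # conflicting actions
        elif len(matches) > 1:
            issues.append(("ambiguity", combo, matches))  # redundant overlap
    return issues
```

This is exponential in the number of conditions, of course; it is only meant to make the three failure modes concrete. The commercial tools presumably do something smarter (BDD-based or SAT-based) for large tables.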
thanks, Bill
“But I don't want to go among mad people,” Alice remarked.
“Oh, you can't help that,” said the Cat: “we're all mad here. I'm mad. You're mad.”
“How do you know I'm mad?” said Alice.
“You must be,” said the Cat, “or you wouldn't have come here.” - Lewis Carroll





I have no idea (always a good start for an answer), but you may be interested in BDDs[^] as well (because they are often a good way to store and manipulate boolean functions).





Many thanks, Harold, that's an excellent resource.
After posting this message, memory came back to me of studying Truth Tables, around 1982, when I was bored out of my mind in the first year of a doctoral program in social science.
Another issue that interests me is the question of heuristic ordering of logical comparisons in order to optimize computation ... given you have a language, like C#, that short-circuits evaluation "smartly."
So, the question of "who's on first?" is very interesting in that context: I use the term "heuristic" since I think the information a programmer has is, often, which one or a few tests are known to be very frequent, and which other tests are estimated to be less frequent.
For very complex sets of logical conditions, given frequency information on each possible condition's occurrence, it's interesting to consider what might be involved in producing code that's optimal: and what happens if, over time, those probabilities fluctuate in regular "clusters" (modes). So that, in State #1 you want one chunk of code, and in other States #n you want other chunks.
Do I make sense ? Wait, no, please ... forget I asked that





What sort of aspects do you want to consider?
For example, if you assume uniform cost of all tests/branches, then the obvious order is in decreasing order of probability.
If the cost is slightly more realistic and based on the fraction of mispredictions (assuming the branch is random but biased), then ... I'm not sure. The total cost (i.e. the average time spent in the tests) would be approximated as F(i) = C(i) + (1 - P(i)) * F(i + 1) with F(n) = 0, where P(i) is the probability of the i-th branch being taken and C(i) = 0.55 * e^(-8 * (P(i) - 0.5)^2) - 0.04 (that's not the best approximation of the cost of a branch ever, but it's the best I could come up with in a couple of minutes).
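That recurrence is easy to play with numerically. A small sketch (the cost model C is just the ad-hoc approximation above, not anything authoritative), which lets you evaluate and compare test orderings:

```python
import math

def branch_cost(p):
    # Ad-hoc misprediction cost model from the post:
    # C(p) = 0.55 * e^(-8 * (p - 0.5)^2) - 0.04
    # (cost peaks near p = 0.5, where the branch is least predictable)
    return 0.55 * math.exp(-8 * (p - 0.5) ** 2) - 0.04

def expected_cost(probs):
    # F(i) = C(i) + (1 - P(i)) * F(i + 1), with F(n) = 0:
    # evaluate from the last test backwards.
    f = 0.0
    for p in reversed(probs):
        f = branch_cost(p) + (1 - p) * f
    return f
```

With uniform per-test costs the recurrence reduces to sorting by decreasing probability; with a misprediction-based cost the best order can differ, which you can check by evaluating expected_cost over permutations of the probabilities.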





Recently, while looking for Braille characters as images, I discovered a rather strange thing: there were many, many websites that taught Braille, but I could not find any that had all of its characters as separate image files, let alone in one place.
In order to fill this gap, I went ahead and created an open source project on SourceForge, BrailleAlphabetGenerator, hoping it would be useful for someone. The intention was to keep the image parameters customizable.
I am seeking feedback from all of you on how to improve the project: code, visuals, or anything else (except perhaps, 'Why not use Unicode?').
Thanks much in advance for your feedback.




