|
If we are using the same JS file from different ASP.NET pages, what are the different ways to persist JavaScript global variables across those pages?
|
|
|
|
|
Could you pass them through the URL of the pages?
--Perspx
"When programming in Visual Basic, you can always know whether a given program will become stuck in a loop and never halt. The answer is 'yes'." - Uncyclopedia
|
|
|
|
|
The entire JS environment goes away (well, mostly) when a new page is loaded, so you must manually save the values somewhere and restore them. You could use cookies to store a limited amount of information client-side; you could post it back to the server (possibly using AJAX if the user would otherwise not be posting anything), store it there, and send it back down encoded in any new page requests; or, as Perspx suggests, you could encode it into all the URLs on the first page and parse it out again when the destination loads. There are a few other techniques for local storage - you can find a short discussion of them here: http://www.niallkennedy.com/blog/2007/01/ajax-performance-local-storage.html
Generally speaking though, "persisting global variables" is the wrong way to think about it. When writing JS, you should always treat the environment as transient, limited to the current page. Anything that needs to persist should be saved explicitly in your code, and the technique for doing so should be a fundamental part of your design, since it will affect the user experience. Try to minimize the amount of persistence necessary - the more pages stand alone, the easier they are to test and debug, and the better they'll work with users' bookmarks, navigation, etc.
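To make the "encode it into the URL" idea concrete, here's a minimal sketch of serialising a flat state object into a query string and parsing it back on the destination page. The state keys (`userPref`, `count`) and the page name are made up for illustration:

```javascript
// Serialise a flat object of strings/numbers into a query string.
function encodeState(state) {
  return Object.keys(state)
    .map(function (k) {
      return encodeURIComponent(k) + "=" + encodeURIComponent(state[k]);
    })
    .join("&");
}

// Parse the query string back into an object on the destination page
// (in the browser you'd pass in window.location.search).
function decodeState(query) {
  var state = {};
  query.replace(/^\?/, "").split("&").forEach(function (pair) {
    if (!pair) return;
    var parts = pair.split("=");
    state[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || "");
  });
  return state;
}

// On the first page you would rewrite each link, e.g.:
//   link.href = "page2.aspx?" + encodeState({ userPref: "dark", count: 3 });
// On page2, restore with:
//   var saved = decodeState(window.location.search);
```

The same encode/decode pair works for the cookie approach too - you just store the encoded string in `document.cookie` instead of a URL.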
Citizen 20.1.01 'The question is,' said Humpty Dumpty, 'which is to be master - that's all.'
|
|
|
|
|
But my problem is that I can neither use cookies nor post my data in encrypted form to the server through an AJAX call, and the value needs to persist in a JS variable across three different AJAX pages. Can you suggest any other trick?
|
|
|
|
|
I suggested two, echoed Perspx for my third suggestion, and linked to several other techniques. I also put in some effort to give you an understanding of why what you seem to seek is fundamentally flawed. What more do you want?
rajivkalra1982 wrote: Can you suggest any other trick?
Sure. Give up on making web pages and use Flash or Silverlight. That's a trick, a dirty one, and your users will suffer for your trickery.
Or, read what I wrote, explain what you're actually trying to do, and ask an honest question. Taking some time to understand the systems you hope to build on, rather than making wildly inaccurate assumptions, wouldn't hurt either.
Citizen 20.1.01 'The question is,' said Humpty Dumpty, 'which is to be master - that's all.'
|
|
|
|
|
|
It's just that your editor is using a Unicode BOM character to specify the file's encoding. It's perfectly valid, but older software/browsers that don't properly handle Unicode may not like it. UTF-8 doesn't really need a byte order mark, since the specification fixes its byte order. Unicode can be an awkward beast to master, but tools are getting better.
If you're having problems with it, try re-editing the file in another text editor to see if it strips out the marker.
More details: http://en.wikipedia.org/wiki/Byte_Order_Mark
By the look of that it seems that there's some interaction with PHP that you may need to be aware of.
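If you end up having to deal with the marker in code rather than in your editor, here's a small sketch (not from the thread) of stripping a leading BOM from text read out of a UTF-8 file - in a JavaScript string the BOM shows up as the single character U+FEFF:

```javascript
// Remove a leading byte order mark (U+FEFF), if present.
function stripBom(text) {
  return text.charCodeAt(0) === 0xfeff ? text.slice(1) : text;
}
```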
|
|
|
|
|
Thank you.
I used smarty template engine.
|
|
|
|
|
In all honesty I wouldn't worry too much about it.
Test the site in a good variety of browsers that your users might be using (especially old browsers that might not work well with unicode).
You're using valid markup, so in my book, you win. If old browsers don't work, then they lose (but it still may be worth fixing if your user base requires it).
|
|
|
|
|
Thank you stevio.
You're nice.
|
|
|
|
|
Don't delete your post after someone has replied. It junks up the forum with [Message Deleted]
"The clue train passed his station without stopping." - John Simmons / outlaw programmer
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
|
|
|
|
I've just been looking at this as a way of reducing load from resources such as JS and images on some sites. It looks like a quick and easy way of offloading static content to a distribution network, which has to be a good thing.
What worries me about it, is there doesn't seem to be (or at least, I can't find) much discussion about the security implications of including HTML and client script code from unknown third party servers.
Is there some kind of security model or checksum system to prevent rogue proxy operators from injecting arbitrary code which could harm site users? Eeek...
|
|
|
|
|
Can't you select a CDN that you know and trust?
|
|
|
|
|
Yes, if you want to pay for it. One of the benefits of Coral is that it appears to be free. It's also apparently distributed, and it seems that the organisers don't necessarily have control over the behaviour of the individual proxies, and a user (end user or web host) can't specify which to use.
Now obviously free is no good if it means abandoning pretty much all security and control over your website, and handing it over to unknown third parties, which is why I'm trying to figure out if this has been considered and mitigated (via some kind of central control / checksums etc).
With my current understanding, an end user gets content from CDN nodes which are closer to them where possible - which would make it even harder to track down if a rogue node had done something bad to some of your users - you wouldn't know which one, and as a site operator, you probably wouldn't see any evidence that it had happened.
If a node started scanning content for keywords and then serving up competitors' or inappropriate sites or content, that would be bad. If a node started injecting bad client script for attacks on users, that would be terrible - and hard to detect.
Now presumably this is either somehow mitigated, or an accepted risk of such an open and free system - what worries me is that I haven't seen much discussion about it - are people unknowingly at risk, or have I got the wrong end of the stick?
|
|
|
|
|
Ah if you're talking free...
This is what the Coral Wiki has to say about it:
Ultimately, we certainly want and hope that many third parties run Coral nodes, so that Coral can grow into a world-wide network of thousands of computers. For now, although the source is available via anonymous CVS, we'd prefer to run a network of several hundred machines on PlanetLab that are under our control, to enable easier maintenance, debugging, and pushing our regular changes, bug-fixes, and new functionality. However, feel free to use Coral regularly! In fact, we welcome your help and feedback as users.
Furthermore, there are more serious security issues we will have to handle once Coral is run on untrusted clients (one can think of the current deployment as "trusted", similar to commercial CDNs.) Until better security protections are in place, we want to retain control over Coral nodes.
|
|
|
|
|
Ahh, well spotted - thanks. From reading the other sections, it seemed that it was already running on untrusted clients.
From the tone of that, it seems that they are at least going to consider the security aspects before doing that.
|
|
|
|
|
I got a SQL injection attack on my website. Please help me: how can I prevent this?
Thanks in advance.
help as a alias.
Be happy and make others happy.Cheer up...........
|
|
|
|
|
Use parameterised queries or, preferably, stored procedures for all database access. Properly written, this should prevent SQL injection.
Bob
Ashfield Consultants Ltd
|
|
|
|
|
On login boxes it would be safer to use the following way to secure parameters:
========================
// Requires: using System.Data.SqlClient;
using (SqlConnection conn = new SqlConnection("connstring"))
{
    SqlCommand comm = conn.CreateCommand();
    comm.CommandText = "SELECT * FROM Admins WHERE (uname = @uname) AND (password = @passw);";
    comm.Parameters.AddWithValue("@uname", TextBox1.Text);
    comm.Parameters.AddWithValue("@passw", TextBox2.Text); // the password box, not TextBox1 again
    conn.Open();
    using (SqlDataReader reader = comm.ExecuteReader())
    {
        if (reader.Read())
        {
            // logged in
        }
        else
        {
            // invalid
        }
    }
}
========================
What you can also do is convert your text input to lowercase and check it for SQL fragments such as *, select, delete, update:
string s = "Input string got from box".ToLower(); // lowercase first, so "SELECT" is caught too
if (s.Contains("select") || s.Contains("*"))
{
    // possible injection attack
}
Le Roux Viljoen
Web Developer
PCW New Media
South African Branch
www.pcwnewmedia.com
|
|
|
|
|
We already tried this.
Please give me something more.
help as a alias.
Be happy and make others happy.Cheer up...........
|
|
|
|
|
Well, if you validate input strings coming from text boxes, there won't be a problem. You can also turn debug off, and make sure that remote debugging is turned off in your web.config on the server, because SQL injection is usually not possible without knowledge of the database table names and structure. The attacker was probably typing random query text into these boxes, like " asas); Delete FROM abc; Select * from abc where(uname = ", to expose error messages from the server, because those would reveal the details of the queried table.
What is the specific case where they broke in ? was it on a login, or when adding comments ?
Also, assuming you are working in .NET, I recommend that you use the built-in login validation and role management that comes with Visual Studio (.NET 2 and up). It implements the best standards and practices for securing login.
Le Roux Viljoen
Web Developer
PCW New Media
South African Branch
www.pcwnewmedia.com
|
|
|
|
|
It's going to be hard to fix your problems without seeing your source code.
Also, site security is a complex topic, and it's difficult to cover everything necessary on a forum. If you're working on anything that's security critical, and you don't know how to make it basically secure, the best solution would be to hire someone who does (and maybe get them to teach you about what they're doing)
(no offence intended - it's a serious point)
|
|
|
|
|
Oh, and I should just mention - whatever you do - don't post a link to your site after telling the world that it's insecure - it's not likely to help your situation in the short term.
|
|
|
|
|
Good suggestions from the other two replies. Just to expand on the points a bit -
Essentially it comes down to not trusting any input from your users. Any time you're using user-supplied data, you have to make absolutely sure that you don't allow it to be treated as executable code, or else you are allowing untrusted users to modify the intended behaviour of your application.
This includes when you use user data as part of an SQL query string.
(language dependent, treat this as pseudocode)
For example if you are using an SQL query like
SELECT * FROM Users WHERE UserName='$UserName' AND Password='$Password'
You need to ensure that the $UserName and $Password inputs cannot contain any SQL code (or client/server code, but that's another story).
If the $UserName field contains something like "Alan';DELETE FROM Users;INSERT INTO USERS (UserName, Password) VALUES ('Hacker', 'MyEasyPassword');"
Then the rogue user would have been able to delete all your users, and insert a new user with the details he wants. This is far from a worst case scenario, which could potentially include having all your sensitive data stolen, existing data modified in ways that you won't notice, and a system put in place to do further malicious activities - all without you even knowing.
There are several approaches to fix this - mostly previously suggested.
1) Thoroughly validate all your input data. Use a whitelist where possible to specify valid input types, and reject or strip anything that looks like code. This includes server script, SQL, and client script code. This approach can be difficult to get right if you use it on its own.
2) As suggested, use parameterised queries. These are safer, as the DB server should not parse the parameter values for code to execute; however, you should still be validating your input (or else you're going to be back here asking about cross-site scripting attacks, junk data, etc.)
3) Use stored procedures. These are different to, but have some of the benefits of, parameterised queries, though a lot of crap is talked about them... You still need to validate the data.
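To illustrate the whitelist idea from point 1: rather than trying to strip "bad" characters out, accept the input only if it matches a known-good pattern. A sketch in JavaScript (the pattern and function name are my own assumptions - the same regex translates directly to C#, and for security it must run server-side, not just in the browser):

```javascript
// Whitelist validation: a user name is valid only if it consists of
// 1-32 letters, digits, or underscores. Quotes, semicolons, spaces,
// etc. are simply never accepted, so injection strings are rejected.
function isValidUserName(name) {
  return /^[A-Za-z0-9_]{1,32}$/.test(name);
}
```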
Hope that helps.
|
|
|
|
|
help as an alias wrote: how can i prevent this
Read Colin's article on this site about such subject. Very useful.
"The clue train passed his station without stopping." - John Simmons / outlaw programmer
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
|
|
|
|