|
My current code:
Try
    objConn.ConnectionString = "Data Source=192.168.0.1;Initial Catalog=Website;User Id=sa;Password=password;"
    objConn.Open()
    Label1.Text = "OK"
Catch ex As Exception
    Label1.Text = ex.Message
End Try
It should be OK. I've got only one instance of SQL Server running, so that shouldn't be the problem.
I tried creating a regular Windows app, which runs under my local user account, and still no result. At least that ruled out the ASPNET user account issue. Even if I copy the connection string from the connection in my Server Explorer there's no result, but I'm pretty sure my connection string is right...
I know using the 'sa' user isn't a good idea in production websites, but for testing purposes I know for sure that the user has enough access rights, which is why I'm using sa at this time.
I love it when a plan comes together
|
|
|
|
|
What the f*ck! I created a new login with only db_owner rights on that very database and, guess what, it works! I think there are some remote access problems with the sa account.
However, I was testing a Windows app; copying and pasting the connection string exactly onto a webform generates the same error.
I love it when a plan comes together
|
|
|
|
|
Hello,
I'm developing an application in VC++ using MFC, with MS Access as the back end. I use the following function to retrieve some data from the database; it is called in a loop around 20 times (one query per table).
I'm observing a strange problem. The first time the set of 20 queries is executed, the total memory allocated within this function comes to around 50 MB (as shown by Task Manager). Now, without closing the application, if I repeat the same set of queries (I can do it through a menu item), the memory allocated within the function is only around 2 MB. From then on, if I repeat the same set of queries any number of times, the allocation is only around 2 MB each time.
If I close the application and start it again, the same behaviour occurs. That is, only the first time is a huge amount of memory consumed.
Please let me know if there is any problem with this code that causes this strange behaviour.
Thanks & Regards,
Nishanth.
int CMyDataBase::QueryRecordsFromDatabase(
    CArray<CStringArray*, CStringArray*>& p_arRecordPtrs,
    const CString& p_csQuery)
{
    int nRetVal(0);
    _RecordsetPtr pRecordSet;
    try
    {
        pRecordSet.CreateInstance(__uuidof(Recordset));
        pRecordSet->Open(_bstr_t(p_csQuery),
                         m_pDatabase.GetInterfacePtr(),
                         adOpenDynamic, adLockOptimistic, adCmdText);

        _variant_t Index;
        _variant_t Indexval;
        Index.vt = VT_I2;

        int nTotalFields = pRecordSet->GetFields()->Count;
        while (!pRecordSet->adoEOF)
        {
            // One CStringArray per record; the caller owns these allocations
            CStringArray* pRecord = new CStringArray;
            pRecord->SetSize(nTotalFields);
            for (int nEachField = 0; nEachField < nTotalFields; ++nEachField)
            {
                Index.iVal = nEachField;
                Indexval = pRecordSet->Fields->GetItem(Index)->Value;
                CString csValue = (char*)_bstr_t(Indexval);
                pRecord->SetAt(nEachField, csValue);
            }
            p_arRecordPtrs.Add(pRecord);
            try
            {
                pRecordSet->MoveNext();
            }
            catch (_com_error& e)
            {
                m_csError = (char*)e.Description();
            }
        }
        pRecordSet->Close();
        pRecordSet.Release();
        nRetVal = 1;
    }
    catch (_com_error& e)
    {
        m_csError = (char*)e.Description();
    }
    catch (...)
    {
        m_csError = "Unknown exception";
    }
    return nRetVal;
}
|
|
|
|
|
You need to understand what the Task Manager columns measure, and how the system allocates memory. The 'Mem Usage' column indicates the process's working set size - the amount of the process's address space which is currently backed by physical memory. This value can change up or down depending on the demand from this or other processes, and isn't really a reflection of how much memory your process has actually allocated. This value includes pages that contain code and data from your process's executable and DLLs. It also includes pages that are shared with other processes, for example memory-mapped files. The sum of the Mem Usage column will normally exceed (Total Memory - Available - System Cache) because pages from memory-mapped files and DLLs are counted against every process that references them.
The system adjusts your process's working set based on memory demand and heuristics, trying to keep a pool of free memory to allow fast allocations without needing to scan all processes at the point memory is required, and to keep a reasonably-sized disk cache. If there's enough free memory, an increase of 50MB on this value is not important.
The 'VM Size' column shows the number of bytes allocated to your process (the same information is exposed through the Process: Private Bytes performance counter). However, this measures virtual memory allocations - pages of virtual address space allocated using VirtualAlloc. Because of how the heap works, this may not go down when you return memory to the heap (by calling free, using delete, etc).
The heap is a mechanism for allocating arbitrarily-sized blocks of memory. The heap has to get that memory from somewhere, and it does so by calling VirtualAlloc. Typically the initial heap allocation is 1MB; once this 1MB is exhausted, more virtual memory - typically not contiguous with earlier allocations - is allocated. In newer versions of Windows, the heap maintains free lists for blocks of memory of a given size; doing this allows the heap manager to allocate blocks smaller than that size quickly by taking the first block from the free list, rather than searching the heap for a block of the right size. When the block is freed, it is returned to the top of the free list, i.e. it will be the next block of this size to be allocated (this helps cache locality, typically).
The heap can become fragmented. Just like your hard disk, you can get into a situation where there's no room for a large allocation in the existing heap, since all the free blocks are smaller. The heap then has to grow to accommodate the large allocation, even if the total of all the free blocks is larger, because the heap cannot move allocated blocks around to find a larger free space. The best way to handle this is simply not to allocate so much memory.
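You can also watch these numbers from inside the process itself rather than through Task Manager. Here's a minimal sketch (Windows-only; link with psapi.lib) using GetProcessMemoryInfo, which reports the working set (the 'Mem Usage' column) and pagefile usage (close to 'VM Size'/Private Bytes) for the current process:

```cpp
#include <windows.h>
#include <psapi.h>   // GetProcessMemoryInfo; link with psapi.lib
#include <stdio.h>

int main()
{
    PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
    // Ask the OS for the current process's memory counters.
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
    {
        // WorkingSetSize corresponds to Task Manager's 'Mem Usage' column;
        // PagefileUsage is roughly 'VM Size' (committed private memory).
        printf("Working set:    %lu KB\n", (unsigned long)(pmc.WorkingSetSize / 1024));
        printf("Pagefile usage: %lu KB\n", (unsigned long)(pmc.PagefileUsage / 1024));
    }
    return 0;
}
```

Calling this before and after a query run would show whether the growth is in your process's private allocations or merely in the working set.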
I think you're looking at the Mem Usage column, and what you're seeing on the first query is simply ADO, the OLE DB provider for your database (MS Jet), your database being loaded into memory, and the heap growing to accommodate the allocations for the data. If you're using the MSDASQL provider, you've got more layers: the MSDASQL OLE DB provider for ODBC, the ODBC driver stack, and the ODBC MS Jet driver. If you are using MSDASQL you should consider using the Jet provider (Microsoft.Jet.OLEDB.4.0) instead.
I can see a few places where you're allocating more memory than you need. CString has constructors which take a wide-character (Unicode) string. Your intermediate casts to char*, when allocating csValue and m_csError, cause an extra allocation and, if you're compiling a Unicode build, an extra conversion each way. These should be unnecessary. Remember that BSTRs are always wide-character strings. If a cast is necessary, cast to LPCWSTR rather than char*.
You could also consider converting the return value of GetItem()->Value directly, rather than copying to an intermediate _variant_t . This copy operation causes the underlying string to be copied, if the variant already holds a string, which is unnecessary because you don't use the variant again after that.
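Putting those two suggestions together, the inner loop could be sketched like this (a fragment only, assuming the same variables as your original function; not a tested drop-in replacement):

```cpp
// Convert the field value directly: constructing the _bstr_t from the
// returned variant avoids the named intermediate _variant_t copy, and
// casting to LPCWSTR picks CString's wide-character constructor -- no
// intermediate char* buffer, no ANSI round-trip in a Unicode build.
CString csValue = (LPCWSTR)_bstr_t(pRecordSet->Fields->GetItem(Index)->Value);
pRecord->SetAt(nEachField, csValue);
```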
Stability. What an interesting concept. -- Chris Maunder
|
|
|
|
|
Thanks for the detailed reply.
The problem here is that the initial consumption is linearly proportional to the number of records in the tables. If I load ten times more data than I have now, the initial consumption goes above 500 MB. It runs out of memory when I try to execute the queries on the database that has more data.
"I think you're looking at the Mem Usage column, and what you're seeing on the first query is simply ADO, the OLE DB provider for your database (MS Jet), your database being loaded into memory, and the heap growing to accommodate the allocations for the data. "
I didn't get what exactly you meant by the above sentence. After executing the set of 20 queries once, if I close the current database through the application and open another database of around the same size (without closing the application), the subsequent queries still seem to consume a smaller amount of memory.
Please let me know if you have some suggestions.
|
|
|
|
|
I have to convert an Access 2000 database to a SQL Server 2000 project, and its DAO code to ADO. Can you show me some links where I can find instructions on this topic? I'm experiencing a lot of trouble with it. Thank you.
|
|
|
|
|
This[^] link has some good info in that area, but unfortunately has little to say about actual code conversion.
If the old application is DAO code, you may have a significant rewrite to do, since I think DAO was fairly Access-specific. ADO is more generic, and I would expect that you should have little to do besides moving the data and changing the connection strings (unless, of course, it is lousy ADO code with lots of internal database-specific assumptions). I have moved ADO apps from Access to SQL Server and little other than connection string changes was needed.
Why would anyone waste time arguing with an accountant about anything? Their sole function is to record what happenned, and any higher aspirations are mere delusions of grandeur. On the ladder of productive contributions they are the little rubber pads at the bottom that keep the thing from sliding out from under you. - Roger Wright
|
|
|
|
|
I have to convert an Access 2000 database to a SQL Server 2000 project, and its DAO code to ADO.
Can you tell me some links where I can find information about these topics? I'm a beginner in this field and I'm experiencing a lot of trouble with it. Thank you.
|
|
|
|
|
Is it possible to limit the number of results returned from a table?
I have a passwords table which, when queried, no matter the SQL statement, should only EVER return a single row - regardless of whether an injection attack somehow executes a SELECT * FROM passwords .
So now you understand why SELECT * FROM pwd_table LIMIT 1 won't work.
I noticed there are some things I can do such as limit the number of queries/hour, but this isn't quite what I had in mind.
It's frustrating being a genius and living the life of a moron!!!
|
|
|
|
|
Like:
Select top 1 * from pwd_table
Wout Louwers
|
|
|
|
|
As Wout Louwers has said, using SELECT TOP 1 will guarantee that only one row is returned. However, that is only for one query, and will not help if, somehow, an injection attack is occurring.
What you need to do is make sure that the account being used to access your SQL Server does not have any rights to query or modify the table(s). If the account accessing SQL Server has no rights to query or modify the table, it cannot be attacked. (By "query" I mean a SELECT statement, and by "modify" I mean INSERT, UPDATE or DELETE.)
In order to actually check a user's validity, or create a new user and so on, you will need to create stored procedures for the operations you wish to perform on the table. You then grant EXECUTE permission on the stored procedures to the accounts that need it.
The way SQL Server security works is that if the user attempts to execute a stored procedure, SQL Server will check that the user is authorised to do so. If so, the stored procedure can run and, normally, no further checks are made. This means that a user can be permitted to use a stored procedure that uses tables the user does not have access to. This protects the tables, as the stored procedure can make additional checks to ensure that the data is valid before the request is performed. It also means that only the operations you specify in the stored procedures can be carried out.
I said that "normally" the security is not checked again once the system discovers that the user has permission to execute the stored procedure. There are cases where additional checks are made. One example is when dynamic SQL is executed inside a stored procedure. Dynamic SQL is where the stored procedure takes information from the parameters, or from tables in the database, constructs a new query internally in a string, and then executes the SQL in that string. Dynamic SQL is a source of second-order injection attacks. These are often harder to trace to their source, because the attack can lie dormant in the database for some time before being activated.
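As a concrete illustration, here is a minimal T-SQL sketch of that arrangement (the table, procedure and account names are hypothetical):

```sql
-- Hypothetical password table; the web application's login
-- never reads it directly.
CREATE TABLE pwd_table (
    username varchar(50) NOT NULL PRIMARY KEY,
    pwd_hash varchar(64) NOT NULL
)
GO

-- The only supported operation is exposed as a stored procedure,
-- which by construction returns at most one row.
CREATE PROCEDURE CheckUser
    @username varchar(50)
AS
    SELECT pwd_hash
    FROM pwd_table
    WHERE username = @username
GO

-- 'webuser' stands in for whatever account the application connects as.
DENY SELECT, INSERT, UPDATE, DELETE ON pwd_table TO webuser
GRANT EXECUTE ON CheckUser TO webuser
```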
I hope I've managed to answer your question, and anticipate follow up questions. Let me know if you need more.
[EDIT]
Sorry, I just realised you were talking about MySQL and not SQL Server. The principles are more or less the same for all databases and should work if MySQL supports stored procedures - It was in the pipeline the last time I looked, but I don't know if it is available yet.
[/EDIT]
Do you want to know more?
WDevs.com - Open Source Code Hosting, Blogs, FTP, Mail and Forums
|
|
|
|
|
HA HA I LIED
Good day to you all and i hope you are well.
Not a direct code question guys, just in need of advice.
Imagine this scenario if you will,
You have an internet-facing server with a SQL Server 2000 database and a client with an MSDE installation. Both are running the same database, say MYDATABASE.
MYDATABASE on the server is very large, say 100 MB.
The client needs one table from the server to do the work they wish to do on their local machine.
They want to get just that one table (and all related tables, obviously) updated on their machine, as they don't need the full database and that would take ages (being 100 MB).
How would you suggest passing that one table to the client, with the client app updating the database on the local machine? Then, once they have done their work, they reconnect to the internet server, pass back the data they have added/updated, and the server updates accordingly.
Any help would be greatly appreciated.
My thoughts and ideas so far - DTS (not used this before though), dynamically formed SQL (clunky, unreliable).
Pleaaaaaseeee Help
Many Thanks
Harry
|
|
|
|
|
Please refrain from using that kind of title on this forum. If you expect people to take your post seriously, please post it with a meaningful title.
~Javier Lozano
(blog)
|
|
|
|
|
I have installed Oracle 8i Release 3 (8.1.7)...I want to install now D2K ( Oracle forms 6i ) ...On inserting the installation CD...the "Oracle installer" screen appears ..then "Oracle installation settings" form appears ..Here we have to specify
Company Name:xyz
Oracle home details
Name: OraHome81
Location: C:\oracle\ora81
Language: English
With these entries, when I click the OK button, the following error message appears with the title "General Information":
The location specified "C:\oracle\ora81", is already used as an Oracle Home for 'Oracle 817 Production'.
It can not be used as an Oracle home for 'nt'.
The OS am using is Windows 2000 Professional with
Pentium 4 ...Please help me out...Thanking you in advance.
|
|
|
|
|
Hello
I planned to generalise a stored procedure where I can pass the table name and field names as parameters, along with the values, and then use that information to add to or update the table that was passed as a parameter. My stored procedure looks like this:
CREATE PROCEDURE Add_Edit_GSupportTables(
    @AddMode bit, @GenCode int, @GenName varchar(100),
    @GenTable varchar(50), @GenCodeField varchar(50), @GenNameField varchar(50))
AS
declare @AutoCode numeric;
if @AddMode = 1
begin
    select @AutoCode = max(@GenCodeField) from @GenTable
    set @AutoCode = isnull(@AutoCode, 0)
    set @AutoCode = @AutoCode + 1
    Insert Into @GenTable values(@AutoCode, @GenName)
end
else
begin
    update @GenTable set @GenNameField = @GenName where @GenCodeField = @GenCode
end
GO
but it gives "Incorrect syntax near @GenTable".
How can I write this stored procedure so that it can be used as a function to add/edit?
with regards
vimal
Help in need is the help indeed
|
|
|
|
|
I'd recommend splitting this into two procedures, add and edit. The query processor will either have to build one query plan with redundant code or, more likely, recompile the query plan every time you swap the value of @AddMode.
As for your specific problem, you can only parameterize values. You can't parameterize schema. If you have to, you've probably designed your database incorrectly. Tables are not objects. Creating and dropping tables are very expensive operations.
If you absolutely must do this, you should use string concatenation to form the schema part of the query, then use sp_executesql to execute it supplying the parameters, e.g.:
-- sp_executesql requires a Unicode (nvarchar) statement
DECLARE @query nvarchar(2000)
SET @query = N'UPDATE ' + @GenTable +
    N' SET ' + @GenNameField + N' = @GenName WHERE ' +
    @GenCodeField + N' = @GenCode'
EXEC sp_executesql @query,
    N'@GenName varchar(100), @GenCode int',
    @GenName, @GenCode
Stability. What an interesting concept. -- Chris Maunder
|
|
|
|
|
Example:
Create procedure prg.ExampleProcedure
    @tablename varchar(50),
    @data int
as
declare @stmt varchar(8000)
-- an int cannot be concatenated to a string directly; cast it first
select @stmt = 'insert into ' + @tablename + ' (column1, column2) ' +
    'select ' + cast(@data as varchar(12)) + ', NULL'
Exec (@stmt)
You can solve your problem like this.
abbaskaya
|
|
|
|
|
Thanks Mr Abbaskaya and Mr Mike
I have 8 tables in my database which have the same structure, so I planned to write a general procedure to merge those 8 modules into a single function and use parameters to make it work.
I have tried the way you explained above, but I am getting a syntax error. Could you tell me the exact syntax so that I can check it here?
Thanks a lot for your help
with regards
vimal
Help in need is the help indeed
|
|
|
|
|
If you have 8 tables with the same structure you should put them all in one table and drop the other seven. That would solve your problem.
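For example (column names are hypothetical, borrowed from the earlier posts), a single table with a discriminator column can stand in for all eight:

```sql
-- One lookup table replaces the eight identical ones; TableType
-- records which logical table each row belongs to.
CREATE TABLE GSupportTable (
    TableType varchar(50) NOT NULL,   -- e.g. 'Country', 'Currency', ...
    GenCode int NOT NULL,
    GenName varchar(100) NOT NULL,
    CONSTRAINT PK_GSupportTable PRIMARY KEY (TableType, GenCode)
)
```

The add/edit procedure then takes TableType as an ordinary value parameter, so no dynamic SQL is needed.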
Wout Louwers
|
|
|
|
|
Those 8 tables have the same structure, but for simplicity, and for future-oriented modifications and additions, I am in a position where I have to use 8 different tables.
with regards
vimal
Help in need is the help indeed
|
|
|
|
|
What do you mean by 'due to simplicity'? If the use of the 8 tables were that simple, you wouldn't have to ask this question. ;)
due to simplicity and future oriented modifications and additions i am in a position to use 8 different tables.
You sound like a manager (PHB). Not like a programmer.
Wout Louwers
|
|
|
|
|
I am unable to install Oracle 8i Release 3 (8.1.7) on Windows 2000 Professional. When I insert the installation CD, the Oracle Enterprise Edition - Autorun screen appears; then, on clicking the "Install/Deinstall Products" option, some processing occurs and then nothing further happens. From what I have heard,
Oracle 8i Release 3 (8.1.7) installs easily on Windows 2000 Professional and has some problems with XP, so where does the problem lie? Can someone help me out? Thanking you in advance.
|
|
|
|
|