How do you know?
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
---
A SQL Server database error 824 means that the page was read successfully from disk, but there is some issue with the page itself. Other messages that might be shown are "fatal error 824 occurred" and "SQL Server detected a logical consistency-based I/O error". The exact message displayed can be more helpful in understanding the problem.
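If you need to investigate further, SQL Server records damaged pages in msdb.dbo.suspect_pages, and DBCC CHECKDB will report the underlying corruption. A minimal sketch (the database name is a placeholder):

-- Pages SQL Server has flagged as suspect
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;

-- Full consistency check on the affected database
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;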
---
I'm running an ASP.Net WebAPI that reaches into a MongoDB on the server.
When adding an entity, how would you return the newly added entity Id (Guid) to the client side?
I could return the Entity that was passed in, with the new Id on it, but that seems excessive. Would you return the Guid?
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
---
If you're building a RESTful CRUD application, then don't return either. Return a 201 status and let the client system re-query as necessary/appropriate.
If it absolutely damages your soul to return nothing but a status, then just send back the ID; but you shouldn't really be serving up anything from a PUT or POST, let alone an entity.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
---
How would the client know what to query? What if I call AddEntity with two identical entities? They both end up with unique IDs, but everything else could be the same.
Or, what if multiple clients all add records at the same time? Re-querying doesn't tell you which one YOUR client added.
And, truthfully, it seems excessive to re-query just because an Add was done. Another round trip to the server?
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
---
Actually, you've pushed me back to the RFC and, my bad, including the ID with a 201 is the right answer. I need to go revisit some work that I just did :/
HTTP/1.1: Method Definitions
As far as multiple adds, that's a modelling issue. If it's allowable, use PUT. If it's not, use POST and make sure your model checks against field combinations that should be unique. Worst case scenario, use a wrapper with a unique identifier (such as a transaction ID) to avoid multiple inserts.
All that said: when working with REST you should not think of efficiency in terms of round trips. REST apps are chatty. Instead, efficiency lies in conformance to the standard, providing a consistent interface, and providing ways to query and slice up collections effectively.
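For illustration, the shape of the exchange would be something like this (the URI and payload are made up):

POST /api/widgets HTTP/1.1
Content-Type: application/json

{ "name": "example" }

HTTP/1.1 201 Created
Location: /api/widgets/3f2504e0-4f89-11d3-9a0c-0305e82c3301
Content-Type: application/json

{ "id": "3f2504e0-4f89-11d3-9a0c-0305e82c3301" }

The Location header points at the newly created resource, and the body carries the new ID.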
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
---
OK, but as far as this...
Nathan Minier wrote: As far as multiple adds, that's a modelling issue.
What I meant was that there could be any number of clients adding records at the same time. If you had to re-query after YOUR instance did an add, you could get back any number of records added by ANY client instance since you did the add. You would have no way of knowing which record YOUR client added versus other clients.
Not sure what modelling has to do with this.
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
---
We're talking at cross purposes, then. As I said in my last post, I was mistaken about returning the ID of the added object along with the 201 code; that's why I posted the link to the standard as well.
I assumed you were talking about multiple clients adding unique entities from the way that you worded it. That would be a modelling issue.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
---
If it were me, I'd return the GUID so the client side could re-query as needed. If you're somehow persisting the newly added data (in a session var?), I'd return the GUID and add it to the data.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
---
It's a WPF app, which creates a new entity and sends it to the API. When the API call returns, I am now passing back the Guid and assigning it to the new entity.
Thanks
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
---
Hi,
I have debugged a stored proc in SQL Server, which was fairly easy to do.
I am trying the same with Oracle procedures; being fairly new to Oracle, I'm having some difficulties. I have researched how to output a cursor value, with no luck.
Also, the SQL is constructed dynamically and then used in a cursor to output results to a Crystal Report.
My final SQL looks like this, which is a whole bunch of variables:
ssql := s_SEL||s_FROM1||s_WHERE||s_WH_PER||s_WH_CC||s_WH_VEN
        ||s_UNION||
        s_SEL||s_FROM2||s_WHERE||s_WH_PER||s_WH_CC||s_WH_VEN;
I tried the following, with no luck:
execute immediate ssql;
I just want to see the values of the SQL statement.
Any help is much appreciated!
---
Either use PRINT ssql;
or SELECT ssql FROM DUAL;
It really depends on what the rest of the procedure looks like.
---
Thank you for your reply.
I did the following:
==
execute immediate ssql;
SELECT ssql FROM DUAL;
==
I got the following:
[Error] Execution (313: 1): ORA-06550: line 313, column 1:
PLS-00428: an INTO clause is expected in this SELECT statement
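From what I can tell, that's because a plain SELECT isn't allowed inside a PL/SQL block; it has to select INTO a variable. A minimal sketch (the variable name is just illustrative):

-- inside the procedure where ssql is declared:
DECLARE
    v_sql VARCHAR2(4000);
BEGIN
    SELECT ssql INTO v_sql FROM DUAL;   -- the INTO satisfies PLS-00428
    DBMS_OUTPUT.PUT_LINE(v_sql);
END;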
---
In my opinion, if you can, create a log table in your database.
An easy way is to insert the SQL into your log table.
e.g.
Create the log table:
CREATE TABLE SQLLOG_DEBUG
(
    SQL_LOG VARCHAR2(4000 BYTE)
);
Insert the SQL into your log table:
INSERT INTO SQLLOG_DEBUG (SQL_LOG) VALUES (ssql);
*This is just a sample, I didn't test the code.
If the SQL is big, you can change SQL_LOG's datatype to CLOB.
You can also use DBMS_OUTPUT.PUT_LINE and redirect the output to a file.
The following page shows how to do it.
[stackoverflow.com]
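For example, a minimal sketch (assuming ssql has been built up as in your procedure):

SET SERVEROUTPUT ON SIZE UNLIMITED  -- SQL*Plus / SQL Developer client setting

BEGIN
    -- ... build ssql as before ...
    DBMS_OUTPUT.PUT_LINE(ssql);  -- print the final statement instead of executing it
END;
/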
---
I'm testing the mysql_real_query API. I just loop executing SQL statements like 'UPDATE ** SET **', and a memory leak occurs. When I use 'top' to check, I see the system's 'used memory' keeps growing until the system or the process crashes, but the %MEM of the 'mysqld' and 'testsql' processes does not increase; the system's free memory just seems to disappear. I tried force-killing the 'testsql' process, but the memory is still in use and cannot be released. Why? Please help me.
int ThreadExeSQL(MYSQL* lpSQLConn, char* sql, int iLen)
{
    if (mysql_real_query(lpSQLConn, sql, iLen))
    {
        MYSQL_RES* lpGetSQLRes = mysql_store_result(lpSQLConn);
        mysql_free_result(lpGetSQLRes);
        return -1;
    }
    MYSQL_RES* lpGetSQLRes = mysql_store_result(lpSQLConn);
    mysql_free_result(lpGetSQLRes);
    return 0;
}

void* ThreadSQL_HexWrite(void* lpGet)
{
    LPThreadParam getParam = (LPThreadParam)lpGet;
    MYSQL* lpSQLConn = (MYSQL*)&getParam->lpSQLConn;
    int iThreadIdx = getParam->iThreadIdx;
    printf("ID:%d\n", iThreadIdx);
    mysql_thread_init();
    lpSQLConn = mysql_init(NULL);
    if (!mysql_real_connect(lpSQLConn, g_host_name, g_user_name, g_password, g_db_name, g_db_port, NULL, 0))
    {
        ThreadSQLError(lpSQLConn, NULL);
        return NULL;
    }
    else
    {
        printf("mysql_real_connect OK!\n");
    }
    for (int i = 0; i < 1000000; i++)
    {
        char lpCmdStr[8192] = "\0";
        sprintf(lpCmdStr, "update %s set %s=0x%d where id=%d", "tb_Data", "Info", i, 1);
        if (ThreadExeSQL(lpSQLConn, (char*)lpCmdStr, strlen(lpCmdStr)))
        {
            MySQLError getError = ThreadSQLError(lpSQLConn, NULL);
            HandleMySqlError(getError);
            continue;
        }
        else
        {
            printf("ok. ");
        }
        usleep(1000 * 10);
    }
    mysql_close(lpSQLConn);
    mysql_thread_end();
    printf("ThreadSQL_HexWrite OK!\n");
    return NULL;
}

MYSQL* g_MySQLConnList[100];

int main()
{
    if (mysql_library_init(0, NULL, NULL))
    {
        printf("could not initialize MySQL client library\n");
        exit(1);
    }
    int thread_num = 1;
    {
        pthread_t* pTh = new pthread_t[thread_num];
        for (int i = 0; i < thread_num; i++)
        {
            LPThreadParam lpSetParam = new ThreadParam;
            lpSetParam->lpSQLConn = (MYSQL*)&g_MySQLConnList[i];
            lpSetParam->iThreadIdx = i;
            printf("---create thread idx:%d\n", i);
            if (0 != pthread_create(&pTh[i], NULL, ThreadSQL_HexWrite, lpSetParam))
            {
                printf("pthread_create failed\n");
                continue;
            }
        }
        for (int i = 0; i < thread_num; i++)
        {
            pthread_join(pTh[i], NULL);
        }
        delete[] pTh;
    }
    mysql_library_end();
    printf("All Done!\n");
    return 0;
}
---
Doesn't seem like the code is complete.
normga wrote: MYSQL* g_MySQLConnList[100];
That is a list of uninitialized pointers.
Where do those pointers get set to actually point to something?
---
Yes.
g_MySQLConnList is filled with something like 'new MYSQL[100]'.
In fact, in my testing environment the code always queries MySQL successfully, but it leaks memory.
---
Perhaps you missed my point.
You have a list of pointers. Nothing more. The pointer must be set to point to something. Where does that happen?
normga wrote: the code always query mysql successful
That doesn't mean anything. Code can run successfully, sometimes, even with uninitialized pointers. It depends on how the memory is laid out.
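To make that concrete, a minimal sketch (the helper name is just for illustration): each slot in the array is only a pointer, and has to be pointed at a real handle before use, e.g. via mysql_init:

#include <mysql.h>

MYSQL* g_MySQLConnList[100];   /* 100 pointers; none points anywhere yet */

void init_connections(void)
{
    for (int i = 0; i < 100; i++)
    {
        /* mysql_init(NULL) allocates and initializes a fresh MYSQL handle;
           without a step like this, g_MySQLConnList[i] is garbage. */
        g_MySQLConnList[i] = mysql_init(NULL);
    }
}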
---
So I need to regularly update a table with data from another table.
The problem is that if I update the normal way, I get a table lock on the target table for half an hour, which is frowned upon by the users. So I need to run the update in batches.
The other problem is that the ID sequence has gaps in it; larger gaps than the batch size.
At the moment I have this solution:
DECLARE
@LastID int = 0,
@NextID int,
@RC int = 1;
WHILE (@RC > 0)
BEGIN
SELECT TOP 5000
@NextID = s.id
FROM Source s
WHERE s.id> @LastID
ORDER BY s.id
;
UPDATE t
SET
    ...
FROM Source s
JOIN Target t ON t.id = s.id
WHERE s.id > @LastID
AND s.id <= @NextID
;
SET @RC = @@ROWCOUNT;
SET @LastID = @NextID ;
END
Which works just fine, but using two selects is getting under my skin.
Any better suggestions for how to do it?
---
How about something like:
DROP TABLE IF EXISTS #ProcessedIDs;
CREATE TABLE #ProcessedIDs (id int NOT NULL Primary Key);
DECLARE @RC int = 5000;
WHILE @RC = 5000
BEGIN
UPDATE TOP (5000)
T
SET
...
OUTPUT
inserted.id INTO #ProcessedIDs
FROM
Target As T
INNER JOIN Source As S
ON S.id = T.id
WHERE
Not Exists
(
SELECT 1
FROM #ProcessedIDs As P
WHERE P.id = T.id
)
;
SET @RC = @@ROWCOUNT;
END;
DROP TABLE IF EXISTS #ProcessedIDs;
NB: The DROP TABLE IF EXISTS syntax is new in SQL Server 2016. If you're using an earlier version, you'll need to use an alternative syntax[^].
The OUTPUT clause should work in SQL Server 2005 or later.
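For reference, the usual pre-2016 pattern is:

IF OBJECT_ID('tempdb..#ProcessedIDs') IS NOT NULL
    DROP TABLE #ProcessedIDs;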
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
---
Ah, yes!
I always tend to forget the output clause.
Thanks!
---
Using a WHERE NOT EXISTS turned out to be very slow because of the anti-join doing index seeks for every row.
I changed it to WHERE ID > (SELECT ISNULL(MAX(ID), 0) FROM @ProcessedIDs), which allows an index scan.
This is orders of magnitude faster than the original non-batched update.
The question is how to use this with composite keys?
---
When you use the TOP clause with the UPDATE statement, there's no guarantee that the rows to update will be picked in any particular order. Using the MAX(id) option, you could end up missing rows.
I notice you've replaced the temporary table with a table variable. Was there a reason for that? IIRC, execution plans for table variables tend to assume they contain a very low number of rows, which might explain the poor performance.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
---
Richard Deeming wrote: When you use the TOP clause with the UPDATE statement, there's no guarantee that the rows to update will be picked in any particular order. Using the MAX(id) option, you could end up missing rows.
I know, and you can't add an order by to an UPDATE or INSERT.
But you can put the SELECT with TOP and ORDER BY in a CTE.
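Something along these lines, with placeholder column names:

WITH batch AS
(
    SELECT TOP (5000)
        t.id,
        t.SomeColumn,
        s.SomeColumn AS NewValue
    FROM Target AS t
    INNER JOIN Source AS s ON s.id = t.id
    WHERE t.id > @LastID
    ORDER BY t.id          -- TOP honors the ORDER BY inside the CTE
)
UPDATE batch
SET SomeColumn = NewValue;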
Richard Deeming wrote: I notice you've replaced the temporary table with a table variable. Was there a reason for that?
No particular reason. I like to keep the scope as local as possible, so it's mostly a habit.
Richard Deeming wrote: IIRC, execution plans for table variables tend to assume they contain a very low number of rows, which might explain the poor performance.
Table variables don't have statistics, which obviously could affect the plan, but since all IDs are unique I don't think it would make a big difference in this case.
But I will test it.
<edit>Oh, and table variables can't go parallel, which obviously can affect performance a lot in this case.</edit>
---
Done some testing now.
And as I suspected, there is no difference in either performance or plan as long as the temp table has one column with unique values.
Until the query goes parallel, that is; then the difference quickly becomes huge.
But as long as I'm batching the query, it stays the same until the batch is big enough to go parallel (which happens between 10,000 and 20,000 rows in this case). But then I will also get a table lock.
And oddly enough, it also runs much slower when parallel until reaching 100,000 rows per batch. I will do some more testing on this.