Good Morning All
I have created a stored procedure for updates; I am using it to update a DataGrid in VB.NET. It updates well, but it does not run the last UPDATE statement. My procedure is like this:
Create Procedure PrcUpdate_Pro_View
(
    @Original_Primary_ID int,
    @Original_Tshwane_Prediction float,
    @Lis_key varchar(50),
    @Tshwane_Prediction float
)
as
UPDATE Property_Mass SET Tshwane_Prediction = @Tshwane_Prediction
WHERE (Primary_ID = @Original_Primary_ID)
  AND (Tshwane_Prediction = @Original_Tshwane_Prediction
       OR (@Original_Tshwane_Prediction IS NULL AND Tshwane_Prediction IS NULL))

UPDATE Property_Mass SET Locked = NULL
WHERE SUBSTRING(Lis_Key, 1, 4) = @Lis_key AND Locked = 1

SELECT Primary_ID, Boundary_area, Attrib_Code, Lis_Key, Func_Key,
       Actual_Extent, Improved_Value, Site_Value, Purchase_price,
       Purchase_date, prop_id, confidence_score, fin_ls_prediction,
       new_lis_key, new_func_key, Tshwane_Prediction, matchflag,
       source, flagtype
FROM Property_Mass
WHERE Primary_ID = @Original_Primary_ID
The second UPDATE does not run, and it depends on the "Lis_key" parameter declared above.
Please help.
Vuyiswa Maseko,
Sorrow is Better than Laughter, it may Sadden your Face, but It sharpens your Understanding
VB.NET/SQL7/2000/2005
http://vuyiswamb.007ihost.com
http://Ecadre.007ihost.com
vuyiswam@tshwane.gov.za
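As an editorial aside: one way to narrow this down is to instrument the procedure with @@ROWCOUNT after each statement, and to check that the value arriving in @Lis_key really is the four-character prefix the WHERE clause expects; if the caller passes the full Lis_Key, nothing will match. A sketch, using the objects above:

```sql
-- Diagnostic sketch: report how many rows the second UPDATE touched.
-- @Lis_key must hold the 4-character prefix, since it is compared
-- against SUBSTRING(Lis_Key, 1, 4).
UPDATE Property_Mass SET Locked = NULL
WHERE SUBSTRING(Lis_Key, 1, 4) = @Lis_key AND Locked = 1

PRINT 'Rows unlocked: ' + CAST(@@ROWCOUNT AS varchar(10))
```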
Vuyiswa wrote: The Last Update does not Run
May I know if the following query fetches any records?
Select * From Property_Mass WHERE SUBSTRING(Lis_Key, 1, 4) = @Lis_key And Locked=1
If so, can you try executing that last UPDATE statement on its own in Query Analyzer to see whether it updates anything?
Hi, thanks for the reply.
First, it returns records, and the UPDATE does update the Locked field when run on its own.
I don't know why it can't update from inside the procedure.
Thanks
Vuyiswa Maseko,
Hi all,
I want to search for records in the table "Property_Summary", which has a primary key "Property_ID" and a foreign key to the "Valuation" table:
select PS.PROPERTY_id from Property_Summary PS
where PS.Cell_iD = 13 and Func_key like 'GEOSS%'
From the above query, I want to find the Property_IDs that match, but only those that have 0.0 in NEW_IMPROVED_VALUE:
Exists(SELECT 1 FROM SDE.VALUATION V
WHERE PROPERTY_ID = PS.PROPERTY_ID
AND VAL_STATUS_ID = 2
AND NEW_IMPROVED_VALUE <1
AND ARCHIVE_DATE IS NULL)
Please help.
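A sketch of how the two pieces might be combined (table and column names taken from the queries above; the NEW_IMPROVED_VALUE < 1 test is kept from the fragment, though a literal = 0.0 comparison may be what is intended):

```sql
SELECT PS.Property_ID
FROM Property_Summary PS
WHERE PS.Cell_ID = 13
  AND PS.Func_Key LIKE 'GEOSS%'
  -- Keep only properties with a matching valuation row.
  AND EXISTS (SELECT 1
              FROM SDE.VALUATION V
              WHERE V.PROPERTY_ID = PS.Property_ID
                AND V.VAL_STATUS_ID = 2
                AND V.NEW_IMPROVED_VALUE < 1
                AND V.ARCHIVE_DATE IS NULL)
```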
Vuyiswa Maseko,
Hi all,
I want to add a string and a character to two columns of a SQL database table. I use a Command object so that the same connection can be reused across different functions, together with a stored procedure to insert the data.
Here are some code segments.
void CDbService::AddGData(string& fileName)
{
_CommandPtr pCom;
pCom.CreateInstance(__uuidof(Command));
pCom->ActiveConnection = m_pConn;
pCom->CommandType = adCmdStoredProc;
pCom->CommandText = _bstr_t("sp_GroupState");
pCom->Parameters->Append(pCom->CreateParameter(_bstr_t("FileName"), adVarChar, adParamInput, 50,&fileName));
}
Here is the stored procedure I used
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go
CREATE PROCEDURE [dbo].[sp_GroupState]
@FileName varchar, @State varchar
As
Insert into tblGroupState(DiskFileName, FileState)
Values (@FileName, @State);
But this code does not add any data to my table. Here is the CREATE TABLE query I used:
USE [RFDB]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[tblGroupState](
[DiskFileName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[FileState] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
Can you guys see any mistake I have made? Any help is really appreciated.
I appreciate your help all the time...
Eranga
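One point worth checking, as an editorial aside: in T-SQL, a parameter declared as varchar with no length defaults to varchar(1), so @FileName and @State would silently truncate to a single character. (Separately, the C++ snippet above appends only the @FileName parameter and never calls Execute on the command.) A sketch of the procedure with explicit lengths, sized to match the table definition shown:

```sql
CREATE PROCEDURE [dbo].[sp_GroupState]
    @FileName varchar(50),   -- explicit length; a bare "varchar" means varchar(1)
    @State    varchar(50)
AS
INSERT INTO tblGroupState (DiskFileName, FileState)
VALUES (@FileName, @State);
```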
I used debug mode and found that the variable type I used to add fileName is wrong, that is, adVarChar. So what type should I use to add a string variable there?
Eranga
'IsMSShipped' means "object created during installation of SQL Server 2005". Possible values are 1 (true) and 0 (false). For more information see the SQL Server 2005 BOL.
I need to create a view, but I need to know why my SELECT statement is not working.
Here is the SELECT statement. When I take out count(l.stud_id) and count(l.less_id), the query executes. In this view I need to show how many students and how many lessons exist for each teach_id.
create view view_reports AS
Select l.teach_id, tp.tp_pymt, t.teach_fname, t.teach_lname, tp.tp_pymt_mon, tp.tp_pymt_yr,
       count(l.stud_id), count(l.less_id)
From tbl_teach_pymt tp, tbl_teachers t, tbl_lessons l
Where l.less_start_date < cast(getdate() as datetime)
  AND l.less_end_date > cast(getdate() as datetime)
Group BY l.teach_id;
Here are the database tables that I am creating the view from.
create table tbl_teachers (
    teach_id int NOT NULL IDENTITY(51000,1),
    rent_id int,
    teach_fname text,
    teach_lname text,
    teach_add text,
    teach_unit_num text,
    teach_city text,
    teach_state text,
    teach_zip int,
    teach_instruments text,
    teach_email text,
    teach_phone int,
    teach_cell int,
    start_date datetime,
    primary key(teach_id),
    foreign key(rent_id) references tbl_rent(rent_id));

create table tbl_teach_pymt (
    tp_id int NOT NULL IDENTITY(10000,1),
    teach_id int,
    tp_pymt money,
    tp_pymt_date datetime,
    tp_pymt_mon as (datepart(m, tp_pymt_date)) persisted,
    tp_pymt_yr as (datepart(yy, tp_pymt_date)) persisted,
    tp_date datetime,
    tp_credit text,
    primary key(tp_id),
    foreign key(teach_id) references tbl_teachers(teach_id));

create table tbl_lessons (
    less_id int NOT NULL IDENTITY(30000,1),
    teach_id int,
    stud_id int,
    less_start_date datetime,
    less_end_date datetime,
    less_time as (datepart(hh:mm, ss_datetime)) persisted,
    less_day int,
    less_instrument text,
    primary key(less_id),
    foreign key(teach_id) references tbl_teachers(teach_id),
    foreign key(stud_id) references tbl_students(stud_id));
AAGTHosting wrote: cast(getdate() as datetime)
First, there is no need to do this. If you had read the documentation for GETDATE()[^] you would have seen that GETDATE() already returns a DATETIME.
AAGTHosting wrote: In this view I need to show how many students exist for each teach_id and how many lessons exist for each teach_id.
It would seem to me, therefore, that tbl_teach_pymt and tbl_teachers are superfluous in this query. You are only querying information from the tbl_lessons table.
SELECT
l.teach_id,
COUNT(l.stud_id),
COUNT(l.less_id)
FROM
tbl_lessons l
WHERE
l.less_start_date < GETDATE()
AND l.less_end_date > GETDATE()
GROUP BY l.teach_id;
I need to have the teacher names and the payment month and year, so I do need to query all three tables.
The reason that I am creating this view is because I am having to use 3 different table adapters to populate a data grid.
Is it better to use the view or to use the three table adapters to populate the data grid?
Greg
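If all three tables really are needed, here is a sketch of the view with explicit joins, the non-aggregated columns added to GROUP BY, and COUNT(DISTINCT l.stud_id) so a student with several lessons is counted once (an assumption about the intent). Note that the name columns are declared text in the DDL above, and text columns cannot appear in a GROUP BY in SQL Server, so they would need to become varchar(n) for this to run:

```sql
CREATE VIEW view_reports AS
SELECT l.teach_id, t.teach_fname, t.teach_lname,
       tp.tp_pymt, tp.tp_pymt_mon, tp.tp_pymt_yr,
       COUNT(DISTINCT l.stud_id) AS student_count,
       COUNT(l.less_id)          AS lesson_count
FROM tbl_lessons l
JOIN tbl_teachers t    ON t.teach_id  = l.teach_id
JOIN tbl_teach_pymt tp ON tp.teach_id = l.teach_id
WHERE l.less_start_date < GETDATE()
  AND l.less_end_date   > GETDATE()
GROUP BY l.teach_id, t.teach_fname, t.teach_lname,
         tp.tp_pymt, tp.tp_pymt_mon, tp.tp_pymt_yr;
```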
Here is an interesting question:
I have ~1000 objects, each of which is described by ~60 million records.
I need the best read performance in the following condition:
- select ~5000 records (by key) for each object.
Which will perform faster: having all of these objects inside one table (~60 billion records) and running one SELECT against it, or having each object inside its own table and running ~1000 SELECTs (one per table)?
The database is either MS SQL, PostgreSQL or MySQL.
I don't care about write performance.
The database choice as well as the layout will depend on the answer to this question. It'll take quite some time to model the situation, so I wanted to see if anyone has reasonable input.
Thank you!
There is insufficient information to give any reasonable answer; anything would be pure guesswork. You are best prototyping your situation and seeing which comes out top.
Kosta Cherry wrote: select ~5000 records (by the key) for each object.
What do you mean by that? Do you mean that you want to pass the individual key for 5000 rows to the database, or you have a range (min/max) of key values that results in 5000 rows being returned?
Kosta Cherry wrote: I have each object inside it's own table
What do you mean by "object"?
Kosta Cherry wrote: I don't care about write performance.
Have you considered using indexes rather than splitting up tables? (indexes slow down writes but can improve read performance if used correctly)
Kosta Cherry wrote: It'll take quite some time to model the situation
From what you've indicated so far there is only really one logical table, but you perhaps want to split it up into 1000 physical tables. If the situation can be modeled by only one logical table, what takes up the time in modelling?
Colin Angus Mackay wrote: There is insufficient information to give any reasonable answer. Anything would be pure guess work. You are best prototyping your situation and seeing which comes out top.
Kosta Cherry wrote:
select ~5000 records (by the key) for each object.
I mean SQL like this:
select * from mytable where ((timekey > X and timekey < Y) AND/OR some other conditions, but they all involve indexed columns only)
The result brings up ~5000 (or so) rows.
If all records are within several tables, I will run that query ~1000 times.
If all records are within one table, I will add into WHERE additional condition like "objectID in (select objectID from other table where ....)", or may be just a join, or whatever other condition.
The problem with modeling is that selection from many tables will be done from a multithreaded application, and I'm not sure how the servers will react compared with a single-threaded approach where everything sits in one table and is retrieved with one query; plus, the DB servers should be properly tuned for one approach or the other. All of this involves a lot of modeling.
I'm kind of leaning towards the "many tables" approach from the point of maintainability (where I can have each table on different table space or even different server).
Colin Angus Mackay wrote: Kosta Cherry wrote:
I have each object inside it's own table
What do you mean by "object"?
"Object" here is just named collection of millions of similar records - like, for example, file is collection of bytes, or picture is collection of pixels.
Colin Angus Mackay wrote: Kosta Cherry wrote:
I don't care about write performance.
Have you considered using indexes rather than splitting up tables? (indexes slow down writes but can improve read performance if used correctly)
Of course I did. I just don't know which way will work faster. On one hand, having all records in one table increases the B+tree index depth by 1-2 levels (depending on the number of keys per leaf), which increases seek time; on the other hand, having records in different tables requires additional time for parsing (1000 queries instead of 1); so it's hard to say what will be faster in the end.
Kosta Cherry wrote: The problem with modeling will be that selection from many tables will be done from multithreaded application, and I'm not sure how servers will react compared with single-threaded approach when all sits in one table and retrieved with one query
A good database server will be expecting to get many queries at the same time. The only restriction is, as I recall, that you can't put down more than one query simultaneously on a single connection. So you have to open up a new connection on each thread. When you complete a query you will obviously close the connection which returns it to the pool and another thread can potentially use it.
Kosta Cherry wrote: those DB servers should be properly tuned for one approach or another.
Well, to an extent. I would guess that database servers are already tuned to having multiple queries run against it. It is the tuning of the queries to avoid things like deadlocks that you need to be concerned about. If all the queries are effectively read only then I don't see any problems in that area. The problems come when you have queries writing to the database. Since you already said write performance isn't an issue, I'm guessing that insertions and updates don't happen frequently, or happen in batches outside of normal hours.
Kosta Cherry wrote: I'm kind of leaning towards the "many tables" approach from the point of maintainability (where I can have each table on different table space or even different server).
Personally, I would have thought the many-tables (especially if they are all the same structure) approach would be a maintenance nightmare.
Also, I think you are possibly over-optimising at this stage.
Kosta Cherry wrote: "Object" here is just named collection of millions of similar records - like, for example, file is collection of bytes, or picture is collection of pixels.
Well, if all the rows are using the same structure, are they not ALL similar? A difference in a column value (or a small number of column values) isn't introducing dissimilarity as far as I can see.
Kosta Cherry wrote: Of course I did. I just don't know which way it'll work faster.
In my experience you never fully know until it is running. I've implemented a system one way and optimised it. I thought I'd learned the lessons of that optimisation and attempted to apply it somewhere else, but it didn't work and some other optimisation worked better.
If you have enough data to build a prototype I would recommend doing that and experimenting. For reference, the largest database I worked on was growing at 1Gb per week. I don't recall how many rows that was, but the largest table was itself growing in the region of a million rows per day.
From that experience I found that the main bottleneck was pulling all those rows off the disk. Queries would frequently be pulling 10s, if not 100s, of millions of rows at once off the disk. In which case what you should be looking at is ensuring the disks are fast enough. A RAID set or SAN would improve performance in that area more than anything you can do.
I know this doesn't really answer your questions. It really just throws up more things to think about. But like I said earlier any answer you are going to get here will be pure guess work.
One additional thing to think about. Are any of these rows inactive? By that I mean do you have an active set of data that is used daily and another set that is only used occasionally (e.g. by reporting or archive systems)? If you do, you could optimise it along those lines and reduce the number of rows that you have to deal with on a daily basis.
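To make the earlier index suggestion concrete, here is a sketch of the single-table layout where one composite index serves the per-object range scan. All names here are hypothetical, invented for illustration:

```sql
-- One logical table; the composite index lets each per-object
-- range query seek directly to its ~5000 rows instead of scanning.
CREATE TABLE measurements (
    objectID int    NOT NULL,
    timekey  bigint NOT NULL,
    payload  varchar(255)
);

CREATE INDEX IX_measurements_object_time
    ON measurements (objectID, timekey);

-- Per-object range query: a seek on objectID followed by a
-- range scan on timekey within that object's slice of the index.
SELECT * FROM measurements
WHERE objectID = 42 AND timekey > 1000 AND timekey < 2000;
```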
What is the OP even asking? I got lost part way through it
"I guess it's what separates the professionals from the drag and drop, girly wirly, namby pamby, wishy washy, can't code for crap types." - Pete O'Hanlon
I have searched high and low for my answer with no success. The DataAdapter designer gives you a way to create multiple queries (Fill, FillBy, Update, etc.). But I don't want to use the designer; I want to code it all. I have the Fill; that was easy. How do I create an additional query method in code?
Reason:
In one part of my program I am getting data from a database and binding it to a DataGrid. At one point I want only certain records to show; at other times I want all records to show.
I want to be able to code like this:
If certainFlag = "true" Then
    adapter.Fill(datatable)
Else
    adapter.AllFill(datatable)
End If
How would I code the AllFill query?
There is no such method as "adapter.AllFill"; check once.
Are you saying that I can't create a query using code? I can do it using the designer. I created one called insertRecord. I want to create one NOT using the designer but using code.
Is there any way to pad-left a string in SQL for MS Access? Or to retrieve a numeric field with zeros in any empty place markers? For example, I have a numeric field of size (3, 0), and if it contains the number 1 I want it to return 001. I've searched but not found anything for MS Access that seems to work.
Any help/advice would be appreciated.
Try:
Right('000' & MyNumberField, 3)
Regards
Andy
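For reference, the same expression used in a query (table and field names hypothetical): Access's & operator coerces the number to text, so prepending zeros and taking the rightmost 3 characters yields the padded form.

```sql
SELECT Right('000' & MyNumberField, 3) AS PaddedValue
FROM MyTable;
```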
Thank you thank you thank you!
I've been circling around and around searching for the past few hours.
That's a very neat way of doing it.
You always pass failure on the way to success.
Hi
Within an update trigger I am trying to get the value of the primary key of the row that fired the trigger. I can't select the primary key using any of the other columns either since the other columns may be duplicated in rows other than the one that fired the trigger.
Thanks
There are 10 types of people in the world, those who understand binary and those who don't.
If you are using SQL Server, you can select the value from the deleted table in your trigger. Trust me - it sounds strange, but it does work.
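A sketch of that technique (table and column names hypothetical): in an UPDATE trigger, both the inserted and deleted pseudo-tables are available, and either one carries the primary key of the affected rows, assuming the key column itself was not updated.

```sql
CREATE TRIGGER trg_MyTable_Update
ON MyTable
AFTER UPDATE
AS
BEGIN
    -- 'deleted' holds the pre-update rows, 'inserted' the post-update
    -- rows; both expose the primary key of the rows that fired the trigger.
    SELECT d.MyPrimaryKey
    FROM deleted d;
END
```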