|
I'm not that at home in ASP; it's not a language I work with very often. You might have more success in the ASP.NET forum.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
I have a database set up in Amazon RDS and I can reach that from my web application using the endpoint address without a problem. However, I have just had to switch web host and the new one only allows outbound requests through a static IP address.
I have, temporarily, inferred the IP address by pinging the endpoint address and that works. However, the IP address can change without warning which is why Amazon uses an endpoint address.
The new web host has told me that Amazon RDS has a range of IP addresses and that, if I can get them, they will set that up for me. However, from what I've read, that does not appear to be the case.
So, am I incorrect and, if so, how do I get the range of addresses or a single, static IP address?
Or is there another way of overcoming this?
I would ask Amazon, but at my account level I do not have access to technical support.
Thank you.
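For what it's worth, the temporary workaround of resolving the endpoint by pinging it can be scripted instead. This is a minimal Python sketch (the endpoint name in the comment is a made-up placeholder); it only demonstrates DNS resolution of a hostname and does not make the underlying address any more stable:

```python
import socket

def resolve_endpoint(endpoint):
    """Resolve a hostname (e.g. an RDS endpoint) to its current IPv4 address."""
    return socket.gethostbyname(endpoint)

# Hypothetical endpoint name, for illustration only:
# resolve_endpoint("mydb.abc123xyz.us-east-1.rds.amazonaws.com")
print(resolve_endpoint("localhost"))  # -> 127.0.0.1 on most systems
```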
|
|
|
|
|
1. Maybe you are on a different VPC?
2. When you created the DB you had to provide a security group. If you followed the basic Amazon instructions, you linked it to the VM you had... You may have to extend or change that group...
3. Subnet of the new VM?
4. The availability zone of the DB and the VM may be different...
Skipper: We'll fix it.
Alex: Fix it? How you gonna fix this?
Skipper: Grit, spit and a whole lotta duct tape.
|
|
|
|
|
Thanks - I'll check that out.
|
|
|
|
|
I have got this assignment question on databases. I have no idea how to solve it. Can anyone please help me?
========================================================
“Mr. A” and “Mr. B” are data-warehousing experts working for the “XYZ” company. They are currently developing an ETL-Validator framework for big-data technology, i.e. validating data between an RDBMS (MySQL / Oracle / DB2) and Hadoop (HDFS / Hive).
The source database (RDBMS) contains millions of records, and all the records from the source have already been migrated to the target database (Hadoop - Hive).
They need your help in implementing the following scenarios:
A. Column-level comparison between the source and target databases (i.e. comparing each column of the source database with each column of the target database).
Now your task is to:
1. Assume a suitable database on the source side and design a table structure (student / retail banking / telecommunication / insurance, or any other) for it, having at least ten columns.
2. Assuming that the buffer size = 500, propose an efficient strategy to reduce the number of comparisons between source and target column records.
3. Write the SQL query for the solution proposed in step #2.
4. Draw the query tree for the query of step #3.
5. Write pseudocode or a program (Java / C#) for the solution proposed in step #2 and step #3.
B. As the foreign key constraint is not implemented in the target database (Hadoop - Hive), implement a foreign key validator for the target database.
1. Assuming that the table used in #A.1 is already present in the target DB, construct one more table on the target side which references the primary key of the table used in #A.1.
2. Assuming that the buffer size = 500, propose an efficient strategy with minimum comparisons to validate the foreign key constraint.
==============================================================
|
|
|
|
|
Nobody here is going to do your homework for you. It is set to test what you know, not what a bunch of random strangers on the Internet know.
If you genuinely don't know where to start, then talk to your teacher.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Hi All,
I have two different databases on two different servers. I compared the source database with the destination database, and there are a few differences: a foreign key doesn't exist in the target database, and a couple of columns are missing. Having found them, I want to update the schema from the source database to the destination database without losing the data on the target database.
When I run it, it gives me an error saying the target database may lose data. Can anybody please help me, or point me to how I can achieve this?
Any suggestion, link or code snippet can be very helpful. Thanks in advance.
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
If you are changing column formats or shortening the length of a field you will often get this message. You need to assess whether the changes are actually going to impact your database, e.g. shortening a varchar from 1000 to 500 is not relevant if the longest string is only 100 characters.
Whereas changing a data format from decimal to int may make a critical difference!
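That "is the change actually safe" check can be done with a simple query before altering the column. A sketch in Python with sqlite3 standing in for the real server (the table and column names are invented): find the longest existing value, and only shrink the column if everything fits the new limit.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE docs (title TEXT)")  # imagine this as VARCHAR(1000)
cur.executemany("INSERT INTO docs VALUES (?)",
                [("short",), ("a" * 100,)])

# The longest existing value decides whether shrinking to 500 can lose data.
longest = cur.execute("SELECT MAX(LENGTH(title)) FROM docs").fetchone()[0]
new_limit = 500
print(longest, longest <= new_limit)  # -> 100 True: safe to shrink
```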
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
No, I am just adding a foreign key constraint and it still errors out. In the new database we have added a foreign key constraint that we want to sync to the old database without losing the data.
Another place it errors out is at a view which references tables from other databases, and from other servers as well.
Any idea how to resolve these things?
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
Hi All,
I have a DacPac file that is generated on a server, in which views use references to different servers, and I want to reference it in my database project as a database reference.
When I do, it throws the following error. Can anybody please help me with it? Any sort of help, a suggestion, a link or a code snippet would be very helpful.
I am generating the DacPac file from Management Studio and trying to reference it in the SSDT project. Supposing it is only because of a version mismatch between SSDT and Management Studio, how can we avoid this mismatch? I am using SQL Server 2008 R2, Visual Studio 2013 and SSDT 12.0; do any of these versions matter? Any sort of help is greatly appreciated.
Another important thing is that my database views reference another database; the views in that database reference yet another database, and the views in that one reference another DB again, and so it goes on. If referencing other DBs is the cause of the error, what can we do to resolve it in those situations? Thank you.
The error is as below:
'C:\Users\AleemA\Desktop\SHP.dacpac' is not accessible or is not a valid schema (.dacpac) file.
Thanks in advance.
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
modified 19-Oct-15 19:33pm.
|
|
|
|
|
|
Hi All,
I have a few SQL scripts, and in them I am using table variables to store values from select queries, then loop through the table variable to avoid duplicate entries in the tables.
Like
DECLARE @TabA TABLE (Id INT IDENTITY(1,1), Name VARCHAR(100), Description VARCHAR(500))
INSERT INTO @TabA (Name, Description) SELECT Name, Description FROM XXXX
Then loop through the table variable using Id and check whether each Name already exists in TabB; if it doesn't, insert it, otherwise don't.
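For what it's worth, the row-by-row loop can usually be replaced by a single set-based insert with a NOT EXISTS guard. A sketch in Python with sqlite3 (the table and column names are assumed, mirroring the post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE TabA (Name TEXT, Description TEXT)")
cur.execute("CREATE TABLE TabB (Name TEXT, Description TEXT)")
cur.executemany("INSERT INTO TabA VALUES (?, ?)",
                [("x", "one"), ("y", "two")])
cur.execute("INSERT INTO TabB VALUES ('x', 'already there')")

# Insert only the names TabB does not have yet -- no explicit loop needed.
cur.execute("""
    INSERT INTO TabB (Name, Description)
    SELECT a.Name, a.Description FROM TabA a
    WHERE NOT EXISTS (SELECT 1 FROM TabB b WHERE b.Name = a.Name)
""")
print(cur.execute("SELECT Name FROM TabB ORDER BY Name").fetchall())
# -> [('x',), ('y',)]
```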
Now I have an interesting question. I am using many inserts in different blocks within the same SQL script. When I try to declare a table variable with the same name, @TabA, again, it says it already exists. I don't want to use a DROP statement in my script, so:
1. Is there any way to make the table variable drop automatically within the same script? Otherwise, do I need to declare a different table variable for each insert? If so, wouldn't that put more stress on the RAM, as many table variables would exist in memory until the whole script has run?
2. I am afraid to use DELETE on the same table variable before reusing it for the next insert. For example, if I have to insert into a TableC table which has the same set of columns, Name and Description, how does DELETE work on the table variable? Would it create Ids continuing from where the previous insert left off, or would it start from 1 again?
3. And another question: if we are running multiple SQL files at the same time, do we need to use different table variable names in all of those SQL files, or does the scope of the table variable end as soon as each SQL file finishes?
Please answer these questions any suggestions, links and even code snippets would help me a lot, thanks in advance.
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
- Use a different name for each table variable, or delete the contents and reuse the existing variable.
- Why are you scared to use DELETE?
- Table variables are limited in scope to the current batch or procedure.
Performance is unlikely to be an issue unless you are processing serious volumes.
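On point 2 of the question: deleting the rows does not reset an identity column; numbering carries on from where it left off. A small Python/sqlite3 analogue (sqlite's AUTOINCREMENT behaves like IDENTITY here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
cur.executemany("INSERT INTO t (name) VALUES (?)", [("a",), ("b",), ("c",)])

cur.execute("DELETE FROM t")                       # clear the table for reuse
cur.execute("INSERT INTO t (name) VALUES (?)", ("d",))

# The new row continues from id 4; ids 1-3 are not handed out again.
print(cur.execute("SELECT id FROM t").fetchall())  # -> [(4,)]
```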
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Perfect, thank you very much Holmes.
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
Hi,
I am using the code below to display times as a pivot table grouped by date; this is basically for fingerprint attendance. I am getting what I want, like this:
2012-06-03 10:23:30,10:23:32,10:24:05,10:24:07,10:24:24,10:24:26
How can I make the comma-separated values display in separate columns instead, so it will be something like this:
created_date - time1 - time2 - time3 - time4 --- etc
this is the code:
SELECT created_date, GROUP_CONCAT(created_time)
FROM fingerprint
GROUP BY created_date
Technology News @ www.JassimRahma.com
modified 15-Oct-15 12:52pm.
|
|
|
|
|
Ok, Please HELLLLLP
I have this table with 16 rows only:
http://www.jassimrahma.com/temp/attendence_table.png[^]
and I am using below code now to split the time of attendance into columns and getting this result:
http://www.jassimrahma.com/temp/attendence_result.png[^]
but I am not happy with it! For example, on 7th July there is only one fingerprint, so it should be only F1, but it's repeated in F1, F3 and F4.
SELECT DATE(attendance_date_time) AS attendance_date, SUBSTRING_INDEX(SUBSTRING_INDEX(GROUP_CONCAT(TIME(attendance_date_time)), ',', 1), ',', -1) AS F1,
IF(LENGTH(GROUP_CONCAT(TIME(attendance_date_time))) - LENGTH(REPLACE(GROUP_CONCAT(TIME(attendance_date_time)), ',', '')) > 1,
SUBSTRING_INDEX(SUBSTRING_INDEX(GROUP_CONCAT(TIME(attendance_date_time)), ',', 2), ',', -1) ,NULL) AS F2,
SUBSTRING_INDEX(SUBSTRING_INDEX(GROUP_CONCAT(TIME(attendance_date_time)), ',', 3), ',', -1) AS F3,
SUBSTRING_INDEX(SUBSTRING_INDEX(GROUP_CONCAT(TIME(attendance_date_time)), ',', 4), ',', -1) AS F4
FROM employee_attendance
GROUP BY DATE(attendance_date_time);
How can I fix this please?
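The repetition comes from how MySQL's SUBSTRING_INDEX behaves: when you ask for element 3 of a one-element list, it simply returns the last element again, which is why every Fn column needs a count guard like the one already wrapped around F2. The same split-and-pad logic, sketched in Python:

```python
def split_times(csv, slots=4):
    """Split 'h:m:s,h:m:s,...' into a fixed number of columns, padding with None."""
    parts = csv.split(",") if csv else []
    return (parts + [None] * slots)[:slots]

print(split_times("10:23:30"))
# -> ['10:23:30', None, None, None]  (no repetition into F3/F4)
```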
Thanks,
Jassim[^]
Technology News @ www.JassimRahma.com
|
|
|
|
|
anybody can help please?
Technology News @ www.JassimRahma.com
|
|
|
|
|
Hi All,
I have TableA, TableB and TableC. I need to fill TableC using a select statement that joins TableB to TableA twice, because TableC is a many-to-many relationship table over TableA and TableB.
TableA(Id, Name, Desc)
TableB(Id, TableAId, AnotherTableAId, TableAName, AnotherTableAName)
TableC(Id, TableAId, AnotherTableAId, TableAName, AnotherTableAName)
Now the problem is that TableA has been filled with new data, and TableC should be synced with TableB but should carry the new Ids from TableA along with the same TableAName and AnotherTableAName values from the old TableB rows. When I run the query below, the rows don't come out correctly: sometimes there are more rows than in TableB, sometimes fewer.
Here is what I tried, without success; any suggestion, link or even code snippet would help a lot.
Insert into TableC (TableAId, AnotherTableAId, TableAName, AnotherTableAName)
SELECT TableAId, AnotherTableAId, TableAName, AnotherTableAName FROM TableB b
INNER JOIN TableA a ON a.Name = b.TableAName
INNER JOIN TableA a2 ON a2.Name = b.AnotherTableAName
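The two-join name resolution itself can be checked in isolation before worrying about row counts. A minimal Python/sqlite3 sketch with an invented schema: each old name pair in TableB resolves to a pair of new TableA ids, and note that duplicate or missing names in TableA are exactly what would multiply or drop rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE TableA (Id INTEGER, Name TEXT)")
cur.execute("CREATE TABLE TableB (TableAName TEXT, AnotherTableAName TEXT)")
cur.execute("CREATE TABLE TableC (TableAId INTEGER, AnotherTableAId INTEGER)")
cur.executemany("INSERT INTO TableA VALUES (?, ?)", [(10, "x"), (20, "y")])
cur.execute("INSERT INTO TableB VALUES ('x', 'y')")

# Resolve both name columns to the new ids by joining TableA twice.
cur.execute("""
    INSERT INTO TableC (TableAId, AnotherTableAId)
    SELECT a.Id, a2.Id
    FROM TableB b
    JOIN TableA a  ON a.Name  = b.TableAName
    JOIN TableA a2 ON a2.Name = b.AnotherTableAName
""")
print(cur.execute("SELECT * FROM TableC").fetchall())  # -> [(10, 20)]
```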
Please help me with this, thanks in advance.
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
Look into UNION. Split your query into two SELECTs by removing one join from each, and combine them with UNION inside a single INSERT:
Insert into TableC (TableAId, AnotherTableAId, TableAName, AnotherTableAName)
SELECT TableAId, AnotherTableAId, TableAName, AnotherTableAName FROM TableB b
INNER JOIN TableA a2 ON a2.Name = b.AnotherTableAName
UNION
SELECT TableAId, AnotherTableAId, TableAName, AnotherTableAName FROM TableB b
INNER JOIN TableA a ON a.Name = b.TableAName
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
|
I did all of that, thanks for your help. What I settled on was to use a table variable, insert values into it, and then loop through the table variable checking whether each record combination already exists in the table; if not, insert the set.
I like your approach too.
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
I got it, Kuldeep; thanks for your support. It is possible by using MERGE and table variables.
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
Hi,
I am using Aloha POS, and it stores the date for every check in separate fields. Now I want to calculate the total time for the checks, but I can't work out how to do it.
- The date is DOB; it's a datetime, but I just need to extract the date part from it.
- The open time is OPENHOUR and OPENMIN
- The close time is CLOSEHOUR and CLOSEMIN
so basically the open time for the check will be the DATE FROM DOB + OPENHOUR + OPENMIN
and the close time will be DATE FROM DOB + CLOSEHOUR + CLOSEMIN
How can I get the total minutes for the check?
Thanks,
Jassim[^]
Technology News @ www.JassimRahma.com
|
|
|
|
|
This might give you some ideas, demonstrating the DATEADD and CONVERT-to-DATE possibilities:
DECLARE @DOB DATETIME = GETDATE()
DECLARE @ODate DATE
DECLARE @OHour INT = 8
DECLARE @OMin INT = 23
DECLARE @OpenDT DATETIME
SELECT @ODate = CONVERT(DATE,@DOB)
SET @OpenDT = @ODate
SELECT @OpenDT = DATEADD(HOUR,@OHour,@OpenDT)
SELECT @OpenDT = DATEADD(MINUTE,@OMin,@OpenDT)
PRINT @OpenDT
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
You can simplify slightly
DECLARE @ODate DATE
DECLARE @OHour INT = 8
DECLARE @OMin INT = 23
DECLARE @OpenDT DATETIME
declare @DOB DATE = getdate()
SELECT @OpenDT = @DOB
SELECT @OpenDT = DATEADD(MINUTE,(@OHour * 60) + @OMin, @OpenDT)
PRINT @OpenDT
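To get the total minutes once both datetimes are built, SQL Server's DATEDIFF(MINUTE, open, close) does the subtraction. The same arithmetic in Python (the argument names follow the post's column names; checks that cross midnight are not handled):

```python
from datetime import datetime, timedelta

def check_minutes(dob, open_hour, open_min, close_hour, close_min):
    """Total open minutes for a check: DOB's date plus the hour/minute fields."""
    day = datetime(dob.year, dob.month, dob.day)      # date part of DOB
    opened = day + timedelta(hours=open_hour, minutes=open_min)
    closed = day + timedelta(hours=close_hour, minutes=close_min)
    return int((closed - opened).total_seconds() // 60)

print(check_minutes(datetime(2015, 10, 15, 13, 5), 8, 23, 10, 53))  # -> 150
```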
=========================================================
I'm an optoholic - my glass is always half full of vodka.
=========================================================
|
|
|
|
|