|
The VARCHAR was for the document format, which is text representing the file extension.
The document itself is param_resume_data...
|
|
|
|
|
Stating it again....
The type must be binary.
The code must reflect that.
The code in your first post does NOT reflect that.
The request was made that you post your code that has been UPDATED to use binary.
|
|
|
|
|
...and this happens for "all" files, without exception? Have you tried a very small picture?
If that's possible, consider the modification below:
file_stream.Write(document_binary, 0, document_binary.LongLength);
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
Same problem with a small pic; it all comes out as a 1-byte file.
I tried changing it to your code like this:
file_stream.Write(document_binary, 0, document_binary.LongLength);
but I'm getting this error:
The best overloaded method match for System.IO.Stream.Write(byte[], int, int) has some invalid arguments
Argument 3: cannot convert from 'long' to 'int'
|
|
|
|
|
jrahma wrote: I tried changing it to your code like this:
LongLength could be quite big; I suggest you write it in chunks of int.MaxValue.
jrahma wrote: it all comes out as a 1-byte file.
I hope that there's more than one byte in the database?
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
The field in the database is a binary datatype with length 255.
|
|
|
|
|
255 bytes? Sounds a bit small. The manual is talking about a LONGBLOB[^].
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
1.
I don't recall ever having seen a Word document that would fit in 255 bytes.
I just created a Word document containing a single letter ("a"), saved it to disk, and found a file size of 29KB. That was Word 2007 BTW.
2.
The only type that is suited for storing binary data IMO is a "blob".
MySQL offers blob, and some size variants thereof. Use those. I never used "binary".
3.
Yes, saving and retrieving data to/from a database is tricky; as long as it doesn't work, it is hard to tell where the problem lies; it could be in the saving part, or in the retrieving part. And when you have several bugs at once (I'm sure you do!) fixing any one of them doesn't seem to help at all, until you get to the last one.
The good thing is, you have to solve it only once, as it would apply to any kind of data, as long as it fits a byte array model, it is all the same.
And the best thing is, millions of people have done this before, so the solution is bound to be available everywhere you look.
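To make point 2 concrete, here is a minimal sketch of what such a table could look like in MySQL; the table and column names are just placeholders, adjust them to your own schema:

-- Sketch only: the file extension goes in a small text column,
-- the raw file bytes go in a blob large enough for real documents.
CREATE TABLE resume (
    resume_id       INT AUTO_INCREMENT PRIMARY KEY,
    document_format VARCHAR(10) NOT NULL,  -- e.g. 'docx', 'pdf'
    resume_data     LONGBLOB    NOT NULL   -- the document itself
);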
|
|
|
|
|
I changed to BLOB. Now it's working for small files, but when I try larger files it doesn't work.
I tried an image of 34 KB and it was fine.
I tried an image of 118 KB, but half of the image was black when I downloaded it.
I tried an image of 1 MB, but it was not downloaded properly; less than 100 KB of the file was downloaded, with no error!
|
|
|
|
|
Thank you.
|
|
|
|
|
BIG thank you
But........
Still having a problem..

|
|
|
|
|
Please post your current code & the message.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
Dear experts,
I need your help now.
I set up merge replication from Server A to Server B and Server C, and then I want to do transactional replication from Server A to another server, D, but I got this problem:
--------
Publication cannot be subscribed to by Subscriber database because it contains one or more articles that have been subscribed to by the same Subscriber database at merge level.
Changed database context to (.Net SqlClient Data Provider)
-------------
How do I resolve it?
Thanks for your help.
|
|
|
|
|
|
Hi,
If I set up only transactional replication it works fine, but when I set up merge replication on Server A and after that set up one more transactional replication, it gives me the errors mentioned above.
Actually, I have one server, A, doing merge replication to clients. Now I want to do one more transactional replication from Server A to other servers, but it raises errors.
Can we do merge replication and transactional replication on the same Server A?
---
|
|
|
|
|
Basically, with merge replication, when a synchronization occurs the final state of the rows is what is merged with the other side. So if I have a stock tracking table in which each stock is updated thousands of times between synchronizations, only the last value of the stock will be replicated.
With transactional replication with updateable subscribers, the changes (the DML) will be replicated as transactions. So if a row in our stock table is updated 1,000 times, 1,000 individual transactions will be replicated.
Now, updateable subscribers is being deprecated and will likely not show up in SQL 11; peer-to-peer is the desired upgrade path.
So if you need transactions replicated transactionally you would want updateable subscribers; if you want bi-directional synchronization between nodes which are frequently disconnected, merge replication is the way to go.
|
|
|
|
|
Hello,
In my application I have three tables: user, admin, operator.
Each of these three can send a message to another.
The message can be a response to a message sent by the other, or it may correspond to an order (because the user places orders with the admin, and the admin can send a message about this order: order approved, rejected, in process).
All of this concerns a print management application.
My problem is determining how many message tables I need (because there are a lot of messages: relating to an order, a response, a simple message, and who is the sender and who is the receiver...).
Here is the image of my model:
http://postimage.org/image/fezebtifp/
Can you help me or give me examples of similar cases?
Thank you very much.
|
|
|
|
|
I wrote a simple messaging application a few years ago and used something like the following:
Message table:
MessageID -- the ID of the message
ParentID -- the ID of the immediate parent message
ThreadID -- the ID of the first message in the thread
SenderID -- the ID of the sender
TimeSent -- timestamp
Content... (whatever other columns you require)
Recipient table:
MessageID -- the ID of the message
RecipientID -- the ID of the recipient
Read/Unread indicator
This allowed for multiple recipients for each message. I used GUIDs for IDs, but you could use INTs if you like.
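If it helps to see it written out, here is a rough SQL sketch of that layout (column types are assumptions, shown in SQL Server syntax; swap the GUIDs for INTs if you prefer):

-- Sketch only: an approximation of the two tables described above.
CREATE TABLE Message (
    MessageID UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
    ParentID  UNIQUEIDENTIFIER NULL,      -- immediate parent message, NULL for the first in a thread
    ThreadID  UNIQUEIDENTIFIER NOT NULL,  -- first message in the thread
    SenderID  UNIQUEIDENTIFIER NOT NULL,
    TimeSent  DATETIME         NOT NULL,
    Content   NVARCHAR(MAX)    NOT NULL   -- plus whatever other columns you require
);

CREATE TABLE Recipient (
    MessageID   UNIQUEIDENTIFIER NOT NULL REFERENCES Message (MessageID),
    RecipientID UNIQUEIDENTIFIER NOT NULL,
    IsRead      BIT              NOT NULL DEFAULT 0,  -- read/unread indicator
    PRIMARY KEY (MessageID, RecipientID)
);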
|
|
|
|
|
I think you forgot the receiver_id. And what is the utility of thread_id?
|
|
|
|
|
ahmadiss wrote: I think you forgot the receiver_id
That's the recipient.
ahmadiss wrote: what is the utility of thread_id
Helps find all messages in a thread quickly.
|
|
|
|
|
You are right, thank you very much.
|
|
|
|
|
I am trying to stop SQL Server 2005 but am unable to do so.
Error occurred: access denied.
Can someone help me out?
|
|
|
|
|
Log in as Administrator and try again.
Vande Matharam - Jai Hind
|
|
|
|
|
We are looking at a change to the code to make its progress watchable and to improve its performance.
We want to change this piece of SQL into a loop version which
could `Select-Insert` records in batches of 1,000 items per iteration.
On each iteration we want a print showing the last inserted item
and, if possible, the elapsed time of the iteration.
The code:
INSERT INTO [tData2]
(
[Key],
Info1,
Info2
)
SELECT
[Key],
Info1,
Info2
FROM
[tData]
-- Conditions were removed as weren't related to this question
Your help is really appreciated.
modified 19-Jul-12 3:50am.
|
|
|
|
|
Performance will suffer if you're going to print each item individually. Insert 50 (or so) records, see if more than a second has passed, and write to the terminal that you've done the next batch.
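A rough T-SQL sketch of that idea, using the 1,000-row batches from the question; it assumes [Key] is a unique, increasing column that can serve as a watermark:

-- Sketch only: copy rows in batches of 1000, printing the last key and the
-- elapsed time after each batch. Assumes [Key] is unique and increasing.
DECLARE @LastKey INT;
DECLARE @Rows INT;
DECLARE @Start DATETIME;
DECLARE @Batch TABLE ([Key] INT);

SET @LastKey = 0;
SET @Rows = 1;

WHILE @Rows > 0
BEGIN
    SET @Start = GETDATE();
    DELETE FROM @Batch;

    INSERT INTO [tData2] ([Key], Info1, Info2)
    OUTPUT inserted.[Key] INTO @Batch ([Key])
    SELECT TOP (1000) [Key], Info1, Info2
    FROM [tData]
    WHERE [Key] > @LastKey
    ORDER BY [Key];

    SET @Rows = @@ROWCOUNT;

    IF @Rows > 0
    BEGIN
        SELECT @LastKey = MAX([Key]) FROM @Batch;
        PRINT 'Last key inserted: ' + CAST(@LastKey AS VARCHAR(20))
            + ', batch took ' + CAST(DATEDIFF(ms, @Start, GETDATE()) AS VARCHAR(20)) + ' ms';
    END
END

Note that PRINT output may be buffered by the client; if you need to see the progress live, RAISERROR with severity 0 and WITH NOWAIT flushes messages immediately.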
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|