|
It seemed like more of a discussion-type question that would fit better in a forum than on Q&A. I guess I was wrong.
What is this talk of release? I do not release software. My software escapes leaving a bloody trail of designers and quality assurance people in its wake.
|
|
|
|
|
I think you do have a point there (as to it being a discussion point), but alas, that question sounds a lot like someone's Computer Science 101 homework.
|
|
|
|
|
Hi,
I'm looking for some good articles about web services (if possible for .NET C#).
More specifically:
- Security
- How to maintain / support several client versions.
- How to handle large projects (e.g. multiple services? How do they work together?)
All information is welcome.
Thanks,
Kurt
|
|
|
|
|
There is Google, and there is the CodeProject articles section, both of which have lots to offer.
Veni, vidi, abiit domum
|
|
|
|
|
Hi,
I am trying to divide my project into layers. Here is how it is set up so far:
- DAL layer: communicates with the database
- Services layer: communicates with the DAL and the UI/UT layers
- UI layer: an MVC4 web application
- UT layer: a unit test project
- Common layer: holds common classes shared across the solution; at the moment it contains the enum classes.
In my DAL layer I have:
RepositoryBase<T> : IDisposable where T : class, new() — this class has CRUD methods and some others like GetAll(), Filter and so on.
public IEnumerable<T> GetAll()
In my Services layer, which has a reference to the DAL layer, I have, let's say:
ProductManager : RepositoryBase<CT_Product>
CT_Product is an EF5 entity.
It overrides the DAL GetAll() method and returns an IList of Product; Product is a class in my model.
public IList<Product> GetAll()
Here is my problem: in my UI layer (or even the UT layer) I don't want to add a reference to my DAL layer; I want to deal only with the Services layer. But when I try to call the Services GetAll() method I get an error, which I can only resolve by adding a reference to the DAL layer, and I don't like that.
Can someone help by pointing me toward how to achieve this?
Benn
|
|
|
|
|
Hi,
I've found a workaround. Here is what I did in the ProductManager class.
I added a private field like this:
private RepositoryBase<CH_Product> repo;
and instantiated it in the constructor:
#region Constructor
public ProductManager()
{
    repo = new RepositoryBase<CH_Product>();
}
#endregion
Here is the GetAll() method:
public IList<Product> GetAll()
{
    IEnumerable<CH_Product> dalEntList = repo.GetAll();
    IList<Product> PrdList = new List<Product>();
    Mapping.Mapper.Map(dalEntList, out PrdList);
    return PrdList;
}
Now the UT and UI layers no longer need to reference the DAL layer.
The disadvantage is that if I want to use the other RepositoryBase methods, I will have to wrap them all the way I wrapped GetAll().
Is there any other solution?
Thanks again.
Benn
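A hedged sketch of one way around that disadvantage, using the thread's own type names (the generic base name ManagerBase and the Map<TSource, TDest> overload are illustrative assumptions, not taken from the posts above): a generic manager wraps RepositoryBase once, so each concrete manager no longer repeats the mapping boilerplate per method.

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch: wrap each repository method and its entity-to-model
// mapping in one generic base class instead of once per manager.
public class ManagerBase<TEntity, TModel>
    where TEntity : class, new()
{
    protected readonly RepositoryBase<TEntity> repo = new RepositoryBase<TEntity>();

    public virtual IList<TModel> GetAll()
    {
        // assumes the mapper exposes a Map<TSource, TDest> overload
        return repo.GetAll()
                   .Select(e => Mapping.Mapper.Map<TEntity, TModel>(e))
                   .ToList();
    }
    // Add, Update, Delete, Filter would be wrapped the same way, once.
}

// ProductManager then shrinks to:
public class ProductManager : ManagerBase<CH_Product, Product> { }
```

This keeps the DAL types out of the public surface while avoiding a hand-written wrapper for every CRUD method.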
|
|
|
|
|
benndonwload wrote: I don't want to add reference of my DAL layer
Meaning what exactly? As in a Visual Studio reference? Sorry, then you are out of luck.
You have to tie it together somehow: either the business service layer does it, or you provide another layer that ties them together, for instance by passing factories, interfaces, etc.
Or you use dynamic loading in the service layer.
However, without other information I wouldn't suggest doing either. One reason to do so would be a truly enormous project, for example one with hundreds of tables.
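A minimal sketch of the "another layer that ties them together" idea, assuming the type names from the question above (the interface name IProductService and the mapping helper are illustrative): the UI/UT projects reference only the interface, and the concrete manager keeps its DAL usage internal.

```csharp
using System.Collections.Generic;

// Contracts surface (could live in the Services or a separate assembly);
// the UI and UT projects see only this interface and the Product model.
public interface IProductService
{
    IList<Product> GetAll();
}

// Concrete manager in the Services assembly; only this assembly
// references the DAL.
public class ProductManager : IProductService
{
    private readonly RepositoryBase<CT_Product> repo = new RepositoryBase<CT_Product>();

    public IList<Product> GetAll()
    {
        // map DAL entities to model objects before they cross the boundary
        var result = new List<Product>();
        foreach (var entity in repo.GetAll())
            result.Add(MapToProduct(entity)); // mapping helper is assumed
        return result;
    }

    private static Product MapToProduct(CT_Product entity)
    {
        // illustrative: copy the fields the UI needs
        return new Product();
    }
}
```

The key point is that no DAL type appears in any public signature, so callers compile against the interface alone.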
|
|
|
|
|
Hi,
By adding a reference, I mean adding a reference to a project (DLL) in Visual Studio by right-clicking References and choosing Add.
What kind of information do you need?
Regards
Benn
|
|
|
|
|
benndonwload wrote: What kind of information do you need?
As I said - the only reason for choosing an alternative would be enormous complexity; for example, if you had hundreds (plural, not singular) of tables (tables, not attributes) in your data layer.
modified 30-Aug-13 20:39pm.
|
|
|
|
|
I am in the process of creating a communications server for an application we are building. Pretty standard architecture: multi-threaded, the server waits for a connection from a remote client and starts a thread to handle data transfer to and from it. Not difficult, and there are lots of examples out there. But what is rarely discussed, maybe because it is application specific, is how data is moved from the read callback to the object that needs the data. I prefer to keep the functionality compartmentalized.
I have done the queue with an event/delegate implementation. That's fine if all the clients' data ends up in one place for processing. But what if an object needs to be paired with a comm handler object? Use an interface? Move the business logic into the handler? That defeats the idea that this should be generic enough to be used in other projects.
All thoughts and suggestions are welcomed!
Thanks,
Doug
I am a Traveler
of both Time and Space
|
|
|
|
|
AeroClassics wrote: All thoughts and suggestions are welcomed!
Attempting to generalize from one case based on hypotheticals is seldom a good idea. The outcome is often code that is never used, overly complex, and more fragile (due to that complexity), and it can even cost more when a real case arrives that is totally incompatible.
|
|
|
|
|
JSchell,
I see your point. Solving a specific type of problem this way can lead to overly complex code on some occasions. However, this really is a specific problem that I tried to explain in as few words as possible in an effort to avoid confusion.
I have solved this problem a couple of different ways and am faced with it again. I was not completely happy with my other solutions. Having spent the majority of my career in the Unix/Linux world, where these things are done differently, I find that while I have written a lot of desktop code in the MS environment, I am still applying a Unix mindset to Windows desktop problems. I am not sure that is the best approach!
So the problem remains. Regardless of where the data stream originates (pipe, TCP/IP, etc.), waiting for a connection and spinning off a thread to handle that communication channel is the easy part. What I find harder is coming up with a decent generic way to hand the data off afterward. I am beginning to lean toward using an abstract class or an interface to put the burden on the user of the server object. That is just passing the buck, so to speak; unfortunately, I also have to use this object!
This led to the original question: what do most folks do when they need to move data from one object to another? I realize that you can just invoke a method on another object if you have a reference, but that, in my personal opinion, is wrong. The server object should not know about the consumer of the data; it should just make the data available. Typically I toss the data into a queue and raise an exposed event. But perhaps there is a better way?
Doug
I am a Traveler
of both Time and Space
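For concreteness, here is a rough sketch of the "queue plus exposed event" approach described above (class and member names are illustrative, not from the poster's code): a thread-safe queue decouples the read callback from the consumer, and the event only announces availability.

```csharp
using System;
using System.Collections.Concurrent;

// Illustrative sketch: the server owns a thread-safe queue and never holds
// a reference to the consumer of the data.
public class CommServer
{
    private readonly BlockingCollection<byte[]> inbox = new BlockingCollection<byte[]>();

    // consumers subscribe; the server only signals that data is available
    public event EventHandler DataAvailable;

    // called from the socket read callback
    protected void OnDataReceived(byte[] data)
    {
        inbox.Add(data);
        var handler = DataAvailable;
        if (handler != null) handler(this, EventArgs.Empty);
    }

    // consumers drain the queue on their own thread
    public bool TryTake(out byte[] data, int millisecondsTimeout)
    {
        return inbox.TryTake(out data, millisecondsTimeout);
    }
}
```

The server stays generic: it knows nothing about who consumes the bytes, only that someone may be listening.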
|
|
|
|
|
AeroClassics wrote: I realize that you can just invoke a method in another object if you have a reference but that, in my personal opinion, is wrong.
As a general statement, your conclusion is specifically wrong. Basically it condemns OO entirely, as well as ignoring the historical concept of RPC and the problems that arose when people decided fine-grained objects were the only way to go. (Early adopters of Java RMI experienced the same problem, and that includes earlier JEE containers.)
AeroClassics wrote: But perhaps there is a better way?
If I have a specific architectural or business need for a message-queuing system, then I use one. I don't use one on a whim, however, because of the complexity involved.
|
|
|
|
|
AeroClassics wrote: That's fine if all the clients data ends up in one place for processing. But what if an object needs to be paired with a comm handler object?
Create a dictionary and pair the socket with the object that is about to handle the data.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
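A minimal sketch of the dictionary pairing suggested above (interface and class names are illustrative): each accepted socket is keyed to the object that will handle its data, and the read callback routes through the map.

```csharp
using System.Collections.Concurrent;
using System.Net.Sockets;

// Illustrative: the contract the comm layer expects its consumers to implement.
public interface IDataHandler
{
    void HandleData(byte[] data);
}

public class HandlerRegistry
{
    // thread-safe map from connection to its paired handler
    private readonly ConcurrentDictionary<Socket, IDataHandler> handlers =
        new ConcurrentDictionary<Socket, IDataHandler>();

    public void Register(Socket socket, IDataHandler handler)
    {
        handlers[socket] = handler;
    }

    // called from the read callback: route the bytes to the paired handler
    public void Dispatch(Socket socket, byte[] data)
    {
        IDataHandler handler;
        if (handlers.TryGetValue(socket, out handler))
            handler.HandleData(data);
    }
}
```

The comm layer stays generic because it only ever sees IDataHandler; the business logic lives on the other side of the interface.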
|
|
|
|
|
Perhaps we are on the same quest.
I just posted this query. Please click here.
Although I may not have as much UNIX background as you do, I approach problems in the same compartmentalized manner, and what I am essentially attempting to do is implement a UNIX tee.
|
|
|
|
|
Storing huge files without a database
Hi,
I'm at my wits' end. I just need some ideas/brainstorming here with the experts/professionals.
Scenario:
I have 30 text files (each file is about 300 MB-500 MB).
What I need to do is convert these files into some sort of binary format and store them somewhere, but not in a SQL database.
I intend to store these files in 'look-alike' containers.
For example:
I have container A and container B.
Each container has a size cap of 1 GB.
Each text file is moved into container A until the quota is reached.
Once it is reached, files go into container B, and so on to C, D, E...
On top of that, I will have an application to locate these files again later.
Is there any medium/container I can use for this purpose?
Thanks
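The container scheme described above can be sketched in a few lines (a hedged illustration only; the ContainerWriter name, directory-per-container layout, and roll-over rule are assumptions, not an existing library):

```csharp
using System;
using System.IO;

// Illustrative: append files to Container_A until the 1 GB quota is hit,
// then roll over to Container_B, and so on.
public class ContainerWriter
{
    private const long Quota = 1L * 1024 * 1024 * 1024; // 1 GB cap per container
    private int index = 0;   // 0 => A, 1 => B, ... (fine while index < 26)
    private long used = 0;   // bytes consumed in the current container

    public string AddFile(string path)
    {
        long size = new FileInfo(path).Length;
        if (used + size > Quota) { index++; used = 0; } // roll over
        string container = "Container_" + (char)('A' + index);
        Directory.CreateDirectory(container);
        string target = Path.Combine(container, Path.GetFileName(path));
        File.Copy(path, target);
        used += size;
        return target; // record this path in an index so the files can be located later
    }
}
```

The returned path would be stored in whatever lookup the locating application uses (even a flat index file would do at this scale).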
modified 7-Aug-13 4:04am.
|
|
|
|
|
It wouldn't be hard for you to write one. I can't think of anything that fulfills this particular feature set out of the box, but what you have asked for isn't that complicated. Effectively, you'd just create a set of arrays and fill them. Obviously you couldn't hold all these arrays in memory at once, but it's easy enough to fill one and discard it before moving on to the next.
A couple of thoughts: because we don't know what platform you are going to run this on, we can't get much more specific. If, however, you are going to run it on Vista or a later operating system, take a look at the Kernel Transaction Manager, as it will help you protect the integrity of the files as you write them out; you can use transactions to support your file writes.
Oh, and whatever you do, make sure the structures you save the files to are backed up regularly.
|
|
|
|
|
Hi, thanks Chill for the reply.
It's undecided yet: either .NET or Java, depending on the complexity and ease of the job.
I will pick up some info on the Kernel Transaction Manager.
However, does KTM work fine with 600 GB to terabyte-scale data?
Are there any issues or constraints it might have?
Is there any other recommendation?
I'm afraid my management may decide to host the application on UNIX or a platform other than Windows.
Then I will be in trouble, having to revamp the core program.
modified 7-Aug-13 4:34am.
|
|
|
|
|
Mercurius84 wrote: I have 30 text files. (each files about 300mb-500mb) What i need to do is to convert these files into some sort of binary and store it some where.
Err....
File system already stores binary files.
File system already has a hierarchy.
File system is not a database.
Any solution, including a database, uses the file system for storage.
So exactly what is the problem?
|
|
|
|
|
Hi,
I just want to compress the files and package them into one container of a configurable size.
Any ideas on how to do the packaging? (Not zipping.)
|
|
|
|
|
Mercurius84 wrote: I just want to compress the files and package them into one container for a configurable size.
Mercurius84 wrote: Any idea of doing the packaging?(not zipping)
How is "compressing and packaging" not zipping?
Use the best guess
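For what it's worth, if zipping turns out to be acceptable after all, .NET 4.5 ships this out of the box in System.IO.Compression (a minimal sketch; the method and class names shown are the real framework API, but the paths are placeholders):

```csharp
using System.IO.Compression;

class PackExample
{
    // packages every file in sourceDir into one compressed container
    static void Pack(string sourceDir, string containerPath)
    {
        ZipFile.CreateFromDirectory(sourceDir, containerPath);
    }

    // restores the files so the locating application can read them back
    static void Extract(string containerPath, string targetDir)
    {
        ZipFile.ExtractToDirectory(containerPath, targetDir);
    }
}
```

Enforcing the 1 GB cap per container would still be up to the caller: check the accumulated input size and start a new archive when the quota is reached.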
|
|
|
|
|
I have found the solution by this product:
Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data.
Thanks
|
|
|
|
|
Mercurius84 wrote: Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data.
Based only on what you described as your needs, this is overkill.
|
|
|
|
|
Hi,
What do you mean by overkill?
Does HDFS have limitations, or does it fail to locate the processing logic near the data?
I have no personal experience with this product.
As summarized:
Hadoop is an Apache Software Foundation distributed file system and data management project with goals for storing and managing large amounts of data. Hadoop uses a storage system called HDFS to connect commodity personal computers, known as nodes, contained within clusters over which data blocks are distributed. You can access and store the data blocks as one seamless file system using the MapReduce processing model.
HDFS shares many common features with other distributed file systems while supporting some important differences. One significant difference is HDFS's write-once-read-many model that relaxes concurrency control requirements, simplifies data coherency, and enables high-throughput access.
In order to provide an optimized data-access model, HDFS is designed to locate processing logic near the data rather than locating data near the application space.
It sounds promising.
|
|
|
|
|
Mercurius84 wrote: What do you mean by overkill...Hadoop is an Apache Software Foundation distributed file system and data management project with goals for storing and managing large amounts of data.
Your stated requirements do not meet the definition of "large amounts of data".
Let me give you some examples of large data
- 2000 transactions a second sustained, with an expected lifetime of 7 years and a real-time need of 6 to 18 months' immediate availability. Each transaction is 1k in size.
- Each originator produces several 100 MB downloads several times a month. Sizing must allow for up to 10,000 originators with a lifetime of 5 years.
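To put the first example in perspective, a quick back-of-the-envelope calculation (illustrative arithmetic only):

```csharp
// 2000 transactions/sec at 1 KB each, sustained:
long perDayBytes = 2000L * 86400 * 1024;    // roughly 177 GB per day
long lifetimeBytes = perDayBytes * 365 * 7; // roughly 450 TB over 7 years
```

That is several hundred times the 600 GB-to-terabyte range described earlier in this thread, which is the gap between "large amounts of data" and what plain file-system storage handles comfortably.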
|
|
|
|