|
Mark Churchill wrote: Having multiple source control systems is just going to lead to disaster.
This I agree with. I was surprised to see it set up like that. I think you are correct that we should only have one project, without many branches or anything. We don't have multiple versions of the project... it is all one big thing, with constant updating going on - a typical web site for a large company.
The result is a fairly nice site... but from the code... you wouldn't think so. It is a heterogeneous environment, with SQL Server and Oracle, several 3rd party applications, and a mixture of Cold Fusion and ASP.Net stuff...
Here is the result:
http://www.awwa.org/[^]
This organization does a lot of good for the world, so I'm not afraid to do a little promotion. There isn't much there for the average Joe, but it's interesting to me anyway...
|
|
|
|
|
Jasmine2501 wrote: Certainly this is not hard enough to justify 500-page documents???
Well, I'm pretty sure that document covers a lot more than just source control. However, in any problem-solving scenario many solutions provide only partial coverage of the problem space, or even introduce a new set of problems - hence the potential complexity of any given problem space.
Jasmine2501 wrote: What is the simple way to do this? My proposal is to eliminate two of the current project repositories, and only have one,
I don't know if it is "the" simple way but I agree with Mark that it is a common approach.
led mike
|
|
|
|
|
I've been studying the different variations of the MVP pattern, and I think I've come up with my own variant[^]. For now, it's just a theory, but if anyone finds it useful, I'll write up an example for it. What do you guys think?
|
|
|
|
|
I like it... but it has been tried before and it seems that in practice it never quite works out...
|
|
|
|
|
Jasmine2501 wrote: I like it... but it has been tried before and it seems that in practice it never quite works out...
Hmm...Why doesn't it quite work out? Is it because of the dependency on the IoC container?
|
|
|
|
|
I think it is because developers can't resist writing their own complete system.
XML was supposed to solve this problem for the web... isolate the presenter from the view...
|
|
|
|
|
It would certainly make it easier to test. It would be very powerful when mocking.
|
|
|
|
|
Hi, I'm running into an error when the project tries to save data to a child table. The error message
is shown below:
NHibernate.ADOException: could not insert: [ConsoleApplication1.user][SQL: INSERT INTO user (UNAME, NID) VALUES (?, ?)] --->
I'm confused about why it cannot get the values to save. I'm using NHibernate version 1.2. The code files are listed below; please help me out here.
Thanks in advance.
app.config:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="nhibernate" type="System.Configuration.NameValueSectionHandler, System, Version=1.2.0.4000,Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
  </configSections>
  <nhibernate>
    <add key="hibernate.connection.provider" value="NHibernate.Connection.DriverConnectionProvider"/>
    <add key="hibernate.dialect" value="NHibernate.Dialect.MsSql2005Dialect"/>
    <add key="hibernate.connection.driver_class" value="NHibernate.Driver.SqlClientDriver"/>
    <add key="hibernate.connection.connection_string" value="Server=;initial catalog=;Persist Security Info=True;User ID=;Password="/>
  </nhibernate>
</configuration>
user.hbm.xml:
<hibernate-mapping default-cascade="none" xmlns="urn:nhibernate-mapping-2.2">
  <class name="ConsoleApplication1.user, ConsoleApplication1" table="user">
    <id name="UID" type="System.Int32" column="UID" unsaved-value="0">
      <generator class="native" />
    </id>
    <property name="UNAME" type="System.String" column="UNAME" not-null="false" />
    <many-to-one name="Nationality" class="ConsoleApplication1.nationality, ConsoleApplication1" fetch="select" cascade="all">
      <column name="NID" not-null="false" />
    </many-to-one>
  </class>
</hibernate-mapping>
user.hbm.cs:
namespace ConsoleApplication1 {
    [System.SerializableAttribute()]
    public class Abstractuser {
        private int uID;
        private string uNAME;
        private ConsoleApplication1.nationality nationality;

        public virtual int UID {
            get { return this.uID; }
            set { this.uID = value; }
        }

        public virtual string UNAME {
            get { return this.uNAME; }
            set { this.uNAME = value; }
        }

        public virtual ConsoleApplication1.nationality Nationality {
            get { return this.nationality; }
            set { this.nationality = value; }
        }
    }

    [System.SerializableAttribute()]
    public partial class user : Abstractuser {
    }
}
nationality.hbm.xml:
<hibernate-mapping default-cascade="none" xmlns="urn:nhibernate-mapping-2.2">
  <class name="ConsoleApplication1.nationality, ConsoleApplication1" table="nationality">
    <id name="NID" type="System.Int32" column="NID" unsaved-value="0">
      <generator class="native" />
    </id>
    <property name="NATIONALITY" type="System.String" column="NATIONALITY" not-null="false" />
    <bag name="User" inverse="true" lazy="true" cascade="all">
      <key>
        <column name="NID" not-null="false" />
      </key>
      <one-to-many class="ConsoleApplication1.user, ConsoleApplication1" />
    </bag>
  </class>
</hibernate-mapping>
nationality.hbm.cs:
namespace ConsoleApplication1 {
    [System.SerializableAttribute()]
    [System.Xml.Serialization.XmlIncludeAttribute(typeof(ConsoleApplication1.user))]
    [System.Xml.Serialization.SoapIncludeAttribute(typeof(ConsoleApplication1.user))]
    public class Abstractnationality {
        private int nID;
        private string nATIONALITY;
        private System.Collections.IList user;

        public virtual int NID {
            get { return this.nID; }
            set { this.nID = value; }
        }

        public virtual string NATIONALITY {
            get { return this.nATIONALITY; }
            set { this.nATIONALITY = value; }
        }

        public virtual System.Collections.IList User {
            get { return this.user; }
            set { this.user = value; }
        }
    }

    [System.SerializableAttribute()]
    public partial class nationality : Abstractnationality {
    }
}
namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            NHibernate.Cfg.Configuration cfg = new NHibernate.Cfg.Configuration();
            cfg.AddAssembly("ConsoleApplication1");
            ISessionFactory factory = cfg.BuildSessionFactory();
            ISession session = factory.OpenSession();
            ITransaction transaction = session.BeginTransaction();
            nationality nat = (nationality)session.Get(typeof(nationality), 1);
            user u = new user();
            u.UNAME = "Pall";
            u.Nationality = nat;
            nat.User.Add(u);
            try
            {
                if (!session.IsConnected)
                {
                    session.Reconnect();
                }
                session.Save(u); // Error occurred at this line!!
                transaction.Commit();
                session.Close();
            }
            catch (Exception e)
            {
                string s = e.ToString();
            }
        }
    }
}
|
|
|
|
|
Harry Sun wrote: NHibernate.ADOException: could not insert: [ConsoleApplication1.user][SQL: INSERT INTO user (UNAME, NID) VALUES (?, ?)]
Those question marks are parameter placeholders. You are using generator="native"... is your UID column set to Identity?
Otherwise check the InnerException property.
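For what it's worth, the quickest way to see the real database error is to walk the InnerException chain - NHibernate wraps the underlying ADO error, so the outer message alone rarely tells you much. A minimal sketch (plain .NET, no NHibernate required; the exception messages here are made up for illustration):

```csharp
using System;
using System.Text;

class ExceptionChainDemo
{
    // Returns every message in an exception's InnerException chain,
    // outermost first - the innermost one is usually the real ADO error.
    public static string Describe(Exception e)
    {
        var sb = new StringBuilder();
        for (Exception cur = e; cur != null; cur = cur.InnerException)
            sb.AppendLine(cur.GetType().Name + ": " + cur.Message);
        return sb.ToString();
    }

    static void Main()
    {
        // Simulated version of what you would catch around session.Save()
        var inner = new InvalidOperationException("Cannot insert explicit value for identity column");
        var outer = new ApplicationException("could not insert: [ConsoleApplication1.user]", inner);
        Console.WriteLine(Describe(outer));
    }
}
```

Dumping `e.ToString()` as your catch block already does includes the chain too, but only if you actually print it instead of assigning it to a local and discarding it.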
|
|
|
|
|
Thank you for your reply. I know I did something terrible; I just wasn't sure which forum this question should be in. I will never do that again. Thanks again, and please forgive me.
|
|
|
|
|
A user adds different kinds of shapes on a panel: depending on where he clicks on the panel, a shape is displayed. I am calling the addeShape() function when the user clicks on the panel. Everything is OK up to this point.
But now let's say a shape is selected and the user wants to add another shape on top of the existing one. Currently the user has to add the shape first by clicking on the panel, the shape appears, and then he has to drag it to the location he wants it to be.
What I want is that even when an existing shape is selected and a new shape is being added on top of it, I can add it directly on top, rather than clicking first on the panel and then dragging to the desired location.
How can I solve this puzzle? Any input will be highly appreciated.
Thanks
|
|
|
|
|
Hi,
I'm working on my second project at the moment that makes use of a 3-layer model. On my first project I had circular dependencies so that the data layer could hydrate and return a business object and take one as a parameter to save it back to the database. This worked fairly well although it was rather complicated.
This time round I'm creating the data layer so that it takes individual values of a business object - strings, integers, etc - in order to save the objects, and it similarly returns the individual values through reference parameters to allow the business layer to hydrate objects.
The second method seems to be making everything a lot simpler, however I've run into a little problem. With the first method when I wanted to get a collection of business objects the data layer would just create a collection and add each object into that before returning it. With the less coupled second method I cannot do this and must rely on the business layer to create a collection based on the values returned by the database. The problem I'm facing is that, without creating extra classes in the data layer just to store records from the database, I can't see a way of getting the data back to the business layer.
I've read as much material on data layers as I can find, but none of it seems to cover this sort of thing. Has anyone got any ideas how to overcome this without creating extra classes which mimic those in the business layer?
Regards,
Matt
|
|
|
|
|
CaptainMatt wrote: The second method seems to be making everything a lot simpler, however I've run into a little problem.
Have you not recognized the seeming contradiction in your statement? Systems will have complexity if they do anything much at all, period. There is no getting away from it. You might find Grady Booch's Turing lecture[^] from last year interesting.
All you have done is abandon an object oriented design for a procedural design. With that will come all the well known problems of a procedural solution. Your previous experience with complexity may seem like a cake walk by the time you are done.
led mike
|
|
|
|
|
Thanks for the reply,
I realise that a completely object oriented design is most probably the correct way to go. However, half of the material I've read on creating data layers recommends the procedural approach. I thought it might be a good idea to try it out and see how it goes; apparently not too well, though.
Since my first message I've made a start on re-writing the layer using objects. At the same time though I'm still interested in how the people who have taken this route have done this sort of thing or if extra data-only objects are used.
Another reason my previous object oriented design was rather difficult to work with was that I initially followed an article that created persistence objects for every business object. A single persistence/database/storage object to persist the whole system makes things easier.
Regards,
Matt
|
|
|
|
|
While my experience in developing apps that have Datalayers is extensive, I have never extensively studied the problem domain. What I have picked up from direct experience and random discussions and articles is that the work and/or complexities associated with this domain have not been eliminated.
led mike
|
|
|
|
|
The common way people end up writing their DAL seems to be creating a procedural CRUD interface which hits stored procedures in the DB ("for security" / "for performance"). This is generally done because people aren't leveraging the metadata features of the language.
Also people like to use the Enterprise Data Access Block. So much so that they use it for general data access - when it's really just a database abstraction kit - when they have no plans to ever switch database vendors. That ends up locking them out of a whole heap of cool tools :P
In .NET at least, it's possible to make a generic persistence layer that can operate on any business object which is correctly tagged up with persistence attributes. With a smattering of generics you can have a single persistence layer that isn't aware of your business objects.
The way we do this is using reflection to read attributes that map the object model to the database schema. Reflection does have a small performance overhead - but you can make up for that with flexible queries - for example joining on referenced records in one round trip. It also lets you provide generic FindByX routines, etc.
So, yes, it is possible - and it's a lot better than having to maintain a whole bunch of structures that are only used for passing data between tiers.
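To make that concrete, here is a minimal sketch of the attribute/reflection trick - the attribute names and the `Customer` class are invented for illustration, and a real persistence layer does far more (caching, joins, identity management), but the core mechanism is just this:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical mapping attributes - illustrative, not from any real framework.
[AttributeUsage(AttributeTargets.Class)]
class TableAttribute : Attribute
{
    public string Name;
    public TableAttribute(string name) { Name = name; }
}

[AttributeUsage(AttributeTargets.Property)]
class ColumnAttribute : Attribute
{
    public string Name;
    public ColumnAttribute(string name) { Name = name; }
}

// A business object "tagged up" with its database mapping.
[Table("CUSTOMER")]
class Customer
{
    [Column("CUST_NAME")] public string Name { get; set; }
    [Column("CUST_CITY")] public string City { get; set; }
}

static class GenericPersister
{
    // Builds a parameterised INSERT for any attributed type via reflection -
    // the persistence layer never needs to know the concrete business class.
    public static string BuildInsertSql(Type t)
    {
        string table = t.GetCustomAttribute<TableAttribute>().Name;
        string[] cols = t.GetProperties()
                         .Select(p => p.GetCustomAttribute<ColumnAttribute>())
                         .Where(a => a != null)
                         .Select(a => a.Name)
                         .ToArray();
        return "INSERT INTO " + table
             + " (" + string.Join(", ", cols) + ") VALUES ("
             + string.Join(", ", cols.Select(c => "@" + c)) + ")";
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(GenericPersister.BuildInsertSql(typeof(Customer)));
    }
}
```

From here a real implementation would fill in the `@`-parameters from the same reflected properties and hand the command to ADO.NET.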
|
|
|
|
|
The problem with a completely object-oriented approach is that it doesn't work so well for a server scenario. Getting a bunch of data and instantiating a huge collection of instances is sort of useless if all you'll do with those instances is to serialize them and send off the data as, say, XML to a client somewhere, then discard all the objects. Likewise, deserializing a stream of (updated) data to a collection of objects and then use reflection or some other run-time schema mapping is rather useless.
In a desktop scenario it sure is nice. Bind the grid to your objects and when the user changes something, modify the instances you've already loaded. When he saves, call Save(). It's great. But it doesn't work like that on a server, because you can't keep all those objects alive between the user reading the data and wanting to save.
Which is why there is so much talk about "service" orientation. They say it's "stateless objects", but without state it isn't really an object at all! Object-orientation is all about encapsulation. Service-orientation sacrifices that, because there's hardly any point protecting the state of an object that is used as little more than a serialization/deserialization mechanism for SQL Server data...
|
|
|
|
|
Hi,
I'm not quite sure I get what the problem is, but I'm attempting to give input based on the assumption that the problem is duplication of the same logic in the data access and business logic code. If so, a possible solution might be to derive the business classes from the data access classes. That is, make your data access classes so they contain all data storage for the type - as protected fields - and the knowledge of how to read/write those fields to and from persistent storage. Then derive business classes from these and publish those parts of the data that should be possible to directly manipulate from the outside using properties, adding validation logic as appropriate.
(Some validation might be more appropriate in the DAL; there is a grey zone here. "Name cannot be more than 50 characters" might be considered data access logic or business logic; at first glance it only depends on the data store and as such could be considered DAL logic, but then again it might actually affect the layout of reports and what have you. In my view, it is simpler to keep all validation in one place, and if so it is undoubtedly the business layer that should do it.)
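A minimal sketch of that layering (class and member names invented for illustration): the data access base owns the protected storage fields and the persistence knowledge, while the business subclass publishes validated properties:

```csharp
using System;

// Data access layer: owns the storage fields and the persistence logic.
class CustomerData
{
    protected string name;   // protected: visible only within the hierarchy

    public virtual void Save()
    {
        // ...write this.name (and friends) to persistent storage here...
    }
}

// Business layer: exposes the data through validated properties.
class Customer : CustomerData
{
    public string Name
    {
        get { return name; }
        set
        {
            if (value != null && value.Length > 50)
                throw new ArgumentException("Name cannot be more than 50 characters.");
            name = value;
        }
    }
}

class Demo
{
    static void Main()
    {
        var c = new Customer();
        c.Name = "Contoso Ltd";               // passes validation
        Console.WriteLine(c.Name);
        try { c.Name = new string('x', 51); } // rejected by the business layer
        catch (ArgumentException ex) { Console.WriteLine(ex.Message); }
    }
}
```

The point is that nothing outside the hierarchy can bypass the validation, because the raw field is only reachable through the business property.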
|
|
|
|
|
Hi gurus, I'm creating a web application for the internet with scalability in mind. In my application I'll have forums used by different groups of people (each group will have their own forums). I will start with 1000 or 2000 groups, with the potential to grow to 5000 (it's guaranteed that I won't exceed 5000 groups, based on the nature of my application).
I was thinking that having all the forum posts in one table would cause problems like a very large index and slow searches (as I want many indexes on the table: PostID, ForumID, GroupID and PostDate). So I was thinking it might be better to have a separate table for each group's forum posts: the indexes stay small, so searches are faster and inserts take less time (especially as some groups are expected to have a large number of posts per day), and it would be easier to move tables to other database servers if the application - and with it the web farm - grows. Now I'm really confused about how to design my DAL.
Assuming that the structure of the forum posts table is like that (this is just a simplified structure not the real one):
CREATE TABLE ForumPosts_x
(
PostID INT IDENTITY(1,1) PRIMARY KEY,
ForumID INT NOT NULL, -- which refers to the ForumID in another table named Forums which includes all forums for all groups
ParentPostID INT NULL,
PostSubject NVARCHAR(200) NOT NULL,
PostText NVARCHAR(5000) NOT NULL,
PostDate DATETIME NOT NULL DEFAULT GETUTCDATE()
)
Note that there's no group ID as x in the table name will be the group id e.g. ForumPosts_19 for group id 19
Now as to designing my DAL, should I:
1. Create stored procedures with dynamic SQL and pass the group id to the procedure i.e. use EXEC and sp_executesql (there's an interesting article on the subject here: http://www.sommarskog.se/dynamic_sql.html)
For example:
CREATE PROCEDURE InsertForumPost
@GroupID INT,
@ForumID INT,
@ParentPostID INT,
@PostSubject NVARCHAR(200),
@PostText NVARCHAR(5000)
AS
DECLARE @tablename NVARCHAR(50), @sql NVARCHAR(4000)
SET @tablename = N'ForumPosts_' + CAST(@GroupID AS NVARCHAR(10))
SET @sql = N'INSERT INTO dbo.' + QUOTENAME(@tablename) +
N' (ForumID, ParentPostID, PostSubject, PostText) VALUES (' +
N'@ForumID, @ParentPostID, @PostSubject, @PostText)'
EXEC sp_executesql @sql, N'@ForumID INT, @ParentPostID INT, @PostSubject NVARCHAR(200), @PostText NVARCHAR(5000)', @ForumID, @ParentPostID, @PostSubject, @PostText
2. Create the procedures with static SQL for each group, i.e. each group has its own set of procedures, which would mean a large number of procedures.
For example, the procedure for inserting a new forum post for GroupID #19 would be:
CREATE PROCEDURE InsertForumPost_19
@GroupID INT,
@ForumID INT,
@ParentPostID INT,
@PostSubject NVARCHAR(200),
@PostText NVARCHAR(5000)
AS
INSERT INTO dbo.ForumPosts_19
(ForumID, ParentPostID, PostSubject, PostText)
VALUES
(@ForumID, @ParentPostID, @PostSubject, @PostText)
3. Use SQL text directly in my code, C# in my case (which I'm highly considering, but I'm a little concerned about how to execute multiple SQL statements - as you can in stored procedures - without having to call ExecuteNonQuery() multiple times, which I believe could affect performance)
4. Drop the whole thing and stick to using one table for all the groups
What would you do if you were designing such an application? Any suggestions are highly appreciated...
|
|
|
|
|
Trust your database server. In general having loads of indexes will slow your inserts, not your selects. You won't need to do any partitioning unless you have a _lot_ of data.
You can send multiple queries and get multiple responses in one round trip.
The stored procedures aren't giving you any benefit here - its just creating noise.
If I were designing this app I'd probably go with a Thread table inheriting Post, and giving Post a reference to the parent Thread (and possibly also the Post that was replied to, if you wanted to track that). Add your Thread table referencing Group as well for your groups. That lets you pull whole threads out of one index in one select, and Threads for each Group. Then I'd use Diamond Binding to handle my DAL...
|
|
|
|
|
Mark Churchill wrote: Then I'd use Diamond Binding to handle my DAL
ROTFLMAO You so had me until that part
led mike
|
|
|
|
|
Well *I* don't have to weigh up cost/benefit because I can click a button and get a license
|
|
|
|
|
Hi Mark, thanks a lot for your help, but would you mind explaining in more details?
Mark Churchill wrote: You can send multiple queries and get multiple responses in one round trip.
Well, actually I doubt this could be of much benefit in my case. As I'm creating a web application, my queries will always be based on user actions, so there's no way to send multiple queries at the same time in my case.
Mark Churchill wrote: The stored procedures aren't giving you any benefit here - its just creating noise.
Could you explain more please? I understand that you want me to go with one table so why not use sprocs in that case? We won't have the problem of caching query plans for every table as we're going to use only one table.
Mark Churchill wrote: If I were designing this app I'd probably go with a Thread table inheriting Post, and giving Post a reference to the parent Thread (and possibly also the Post that was replied to, if you wanted to track that). Add your Thread table referencing Group as well for your groups. That lets you pull whole threads out of one index in one select, and Threads for each Group. Then I'd use Diamond Binding to handle my DAL...
I'm a little lost here, what exactly do you mean by inheritance here? I was going to use a ParentPostID INT NULL field in the ForumPosts table (see the CREATE TABLE section in my original post) which will refer to the thread, is that what you mean? I was also going to index that field so that I can find threads and replies fast if this what you mean by pulling out the threads.
So I was indeed going to index the PostID, GroupID, ParentPostID and PostDate fields (this is why I thought about separating the data into tables, one for each group: I thought I would have too many indexes - 4, as you can see - and that this could hurt insert performance tremendously).
By the way, if one table is used, the PostID will be per group, not an identity field. I was only going to use identity fields if I had a table for each group, not with a single table. (This is for scalability's sake, to make it easy to move data to other databases, or even to tables with the same structure in the same database.) I'll have another table with the last ID used for every group, e.g.
CREATE TABLE LastUsedID
(
GroupID INT NOT NULL,
LastUsedID INT NOT NULL
)
Thanks again for all your help, waiting for your reply...
|
|
|
|
|
I wrote: You can send multiple queries and get multiple responses in one round trip.
This was in reference to "...which I'm highly considering but a little concerned about how to execute multiple SQL statements..." - this isn't anything to worry about. A query isn't restricted to a single statement - so you can send "select foo; select baz" in one ExecuteDataSet() and get back a DataSet containing multiple DataTables.
I wrote: The stored procedures aren't giving you any benefit here - its just creating noise.
The stored procedures were just performing basic CRUD operations. SQL Server will cache the execution plan for your ad-hoc queries anyway. For a simplistic view, stored procedures are for providing abstraction/code reuse rather than performance (some would also say they help with security).
Waleed Eissa wrote: I'm a little lost here, what exactly do you mean by inheritance here?
A thread is basically a post that also has some extra information, like a title, a group it belongs in, etc. Say your database structure has a table, Product (Id, Description) and ServiceProduct(Id, CostPerHour) with ServiceProduct.Id being a foreign key to Product.Id. This defines that for every set of ServiceProduct data there is Product data, meaning ServiceProduct inherits Product, which is pretty analogous to how inheritance relationships work in code. This can be handy.
Using a ParentPostId field makes it difficult to pull a whole thread out the database. Given a parent post I would have to do an index scan to get the 2nd post, then again to get the 3rd, etc.
Say you have this setup:
Post (Id, Author, BodyText, TimePosted, ParentThreadId) and Thread (Id, Title, GroupId)
Thread.Id is a fk to Post.Id (inheritance)
Post.ParentThreadId is a fk to Thread.Id (reference)
This means that I can easily select threads in a group (Thread by GroupId), Posts in a Thread (Post by ParentThread).
If you are feeling uncomfortable with the inheritance relationship, then you could just have a Thread table that acts as a bit of a stub to group posts.
It might be worth having a look at how forums like phpbb handle their database structure (considering I'm coming up with this on the fly).
I'm not comfortable with the LastUsedId. It seems incredibly unlikely you would approach the 4 billion odd posts that just an int would provide. SQL Server could handle that kind of indexing using the processor in my phone - you'd be out of disk before you ran out of primary keys - and if you need to partition, just move everything out by GroupId - having holes in your index isn't an issue.
Insert performance isn't an issue for you - your users are reading and searching much more than they are posting. The more indexes the better - every millisecond you spend updating an index is going to save you a hundred milliseconds of net lookup time.
|
|
|
|
|
Mark Churchill wrote: I wrote:
You can send multiple queries and get multiple responses in one round trip.
This was in reference to "...which I'm highly considering but a little concerned about how to execute multiple SQL statements..." - this isn't anything to worry about. A query isn't restricted to a single statement - so you can send "select foo; select baz" in one ExecuteDataSet() and get back a DataSet containing multiple DataTables.
Well, I'm sorry, I probably should've explained it more clearly. What I actually meant is that when you use stored procedures you can easily use many SQL statements in the same procedure,
for example:
begin transaction
insert into foo ...
update foo2 set ...
... etc
This is very easy with stored procedures, but I guess not so easy with ad-hoc SQL statements. This is what I meant to say.
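For what it's worth, ad-hoc SQL can carry several statements, and the transaction itself, in a single command text. A rough sketch of composing such a batch (plain string handling; the table names come from the example above, and in real code you would hand the result to a SqlCommand with proper parameters rather than build it by hand):

```csharp
using System;

class BatchDemo
{
    // Joins several statements into one command text wrapped in a
    // transaction, so a single ExecuteNonQuery round trip runs them all.
    public static string BuildBatch(params string[] statements)
    {
        return "BEGIN TRANSACTION;\n"
             + string.Join(";\n", statements)
             + ";\nCOMMIT TRANSACTION;";
    }

    static void Main()
    {
        string sql = BuildBatch(
            "INSERT INTO foo (A) VALUES (@a)",
            "UPDATE foo2 SET B = @b WHERE Id = @id");
        Console.WriteLine(sql);
        // In real code: new SqlCommand(sql, connection).ExecuteNonQuery();
    }
}
```

Whether batching ad-hoc text like this beats a stored procedure is debatable, but the "multiple statements need multiple round trips" concern itself doesn't hold.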
Mark Churchill wrote: A thread is basically a post that also has some extra information, like a title, a group it belongs in, etc. Say your database structure has a table, Product (Id, Description) and ServiceProduct(Id, CostPerHour) with ServiceProduct.Id being a foreign key to Product.Id. This defines that for every set of ServiceProduct data there is Product data, meaning ServiceProduct inherits Product, which is pretty analogous to how inheritance relationships work in code. This can be handy.
Using a ParentPostId field makes it difficult to pull a whole thread out the database. Given a parent post I would have to do an index scan to get the 2nd post, then again to get the 3rd, etc.
Say you have this setup:
Post (Id, Author, BodyText, TimePosted, ParentThreadId) and Thread (Id, Title, GroupId)
Thread.Id is a fk to Post.Id (inheritance)
Post.ParentThreadId is a fk to Thread.Id (reference)
This means that I can easily select threads in a group (Thread by GroupId), Posts in a Thread (Post by ParentThread).
If you are feeling uncomfortable with the inheritance relationship, then you could just have a Thread table that acts as a bit of a stub to group posts.
It might be worth having a look at how forums like phpbb handle their database structure (considering I'm coming up with this on the fly).
I like your idea about having a separate table for threads. I think this can speed things up, as we can have fewer indexes on the same table. It might just be harder to maintain, though, as you have the data in two tables, but I still like the idea.
Mark Churchill wrote: I'm not comfortable with the LastUsedId. It seems incredibly unlikely you would approach the 4 billion odd posts that just an int would provide. SQL Server could handle that kind of indexing using the processor in my phone - you'd be out of disk before you ran out of primary keys - and if you need to partition, just move everything out by GroupId - having holes in your index isn't an issue.
Actually this wasn't meant to avoid reaching the maximum limit for int, as it's large enough (by the way, the max is 2 billion, not 4, as int is signed). It's intended for scalability: to make it easier to move data to different databases (say a group has way too many posts and is making the table too large, so you move that group's data into a separate database) without having to worry about the correct seed value for the identity field.
Thanks for all your help...
|
|
|
|
|