|
I have a question: I want to create a plugin manager, and I have a small example test-bed written using abstract classes.
The function CreatePluginClass will be the main function controlling a set of mini classes that all share the same interface, IPlugin. It creates a pointer to the new class, stored in memory, that will eventually have to be released, so I made an auto-release template class, CPluginObject, to do that for me, as you can see in the example's main function.
Will this be reliable? I'd like to go with this type of approach, but if something is wrong with it, can someone help me point out what it could be? So far everything looks fine to me, but I'm not sure it won't cause me problems later on. If anyone has done anything like this before, I would appreciate the helpful tips.
When I tested it I got the results I was looking for, I guess; the console output shows
TestPlugin2::Function1
TestPlugin2::Function1
#include <stdio.h>

class IPlugin {
public:
    virtual void Function1() = 0;
    virtual void Function2() = 0;
};

class TestPlugin1 : public IPlugin {
public:
    virtual void Function1() {
        printf("TestPlugin1::Function1\n");
    }
    virtual void Function2() {
        printf("TestPlugin1::Function1\n");
    }
};

class TestPlugin2 : public IPlugin {
public:
    virtual void Function1() {
        printf("TestPlugin2::Function1\n");
    }
    virtual void Function2() {
        printf("TestPlugin2::Function1\n");
    }
};

#define PLUGIN_CLASS_1 100
#define PLUGIN_CLASS_2 100

void CreatePluginClass(int iid, IPlugin **pObj)
{
    *pObj = NULL;
    if (iid == PLUGIN_CLASS_1)
        *pObj = new TestPlugin1();
    if (iid == PLUGIN_CLASS_2)
        *pObj = new TestPlugin2();
}

template<int N>
class CPluginObject {
public:
    CPluginObject() {
        CreatePluginClass(N, &pPluginObj);
    }
    ~CPluginObject() {
        if (pPluginObj != NULL)
            delete pPluginObj;
        pPluginObj = NULL;
    }
    IPlugin *GrabObject() const {
        return pPluginObj;
    }
protected:
    IPlugin *pPluginObj;
};
int main()
{
    CPluginObject<PLUGIN_CLASS_1> plugin;
    IPlugin *pObj = plugin.GrabObject();
    pObj->Function1();
    pObj->Function2();
    return 0;
}
modified on Sunday, August 1, 2010 5:36 AM
|
|
|
|
|
Hi, I suggest you simplify.
There is no need to mix templates, virtual methods and a global factory function. Wouldn't it be simpler to let polymorphism do the work: inherit from the plugin interface and that's it. This works because the virtual methods Function1()/Function2() will be dispatched to the specific implementation (even when called via a base class pointer IPlugin).
The problem with plugins is typically memory handling; at least that is the first thing that comes to mind. You need to make sure that dynamic memory allocation/deallocation happens in the same heap (STL strings and containers can't be used out-of-the-box). Of course, you can use allocators to get better control of memory allocation/deallocation across library boundaries. There is also a nice book called Imperfect C++ which describes these kinds of problems and possible solutions.
Not sure if I answered your questions, but I hope I could give some help.
/M
PS: There is a typo in TestPlugin1::Function2() and TestPlugin2::Function2() in the printf text string; that's why the debug output is other than expected.
|
|
|
|
|
I shall take a look at polymorphism; I had planned on eventually making a large class factory. I wanted to implement a hash library.
For example:
class IHashAlgo {
public:
    virtual void Init() = 0;
    virtual void Update(PBYTE pData, INT nLength) = 0;
    virtual void Final() = 0;
    virtual void GetResults(PBYTE pData) = 0;
};
Abstract class 1 would be MD5 (the results would be the digest), abstract class 2 would be SHA1, etc.
I wanted to make a cleaner class library because it got really ugly when I was doing it this way.
I hope you understand what I am trying to describe, at least:
class CHashAlgo {
public:
    void Init(INT nType) {
        m_Type = nType;
        switch (nType) {
        case HASHTYPE_MD5:
            MD5Init(&m_md5);
            break;
        case HASHTYPE_SHA1:
            SHA1Init(&m_sha1);
            break;
        default:
            break;
        }
    }
    void Update(PBYTE pData, INT nLength) {
        switch (m_Type) {
        case HASHTYPE_MD5:
            MD5Update(&m_md5, pData, nLength);
            break;
        case HASHTYPE_SHA1:
            SHA1Update(&m_sha1, pData, nLength);
            break;
        default:
            break;
        }
    }
    void Final() {
        switch (m_Type) {
        case HASHTYPE_MD5:
            MD5Final(&m_md5);
            break;
        case HASHTYPE_SHA1:
            SHA1Final(&m_sha1);
            break;
        default:
            break;
        }
    }
    void GetResults(PBYTE pData) {
        switch (m_Type) {
        case HASHTYPE_MD5:
            memcpy(pData, m_md5.digest, MD5_DIGEST_SIZE);
            break;
        case HASHTYPE_SHA1:
            memcpy(pData, m_sha1.digest, SHA1_DIGEST_SIZE);
            break;
        default:
            break;
        }
    }
    int m_Type;
    MD5_CONTEXT m_md5;
    SHA1_CONTEXT m_sha1;
};
|
|
|
|
|
Hi,
I think most experienced C++ programmers would get what you're trying to do: handle hashing without worrying which hashing algorithm is bound to the higher-level functions.
For composing a digital signature you need a digest (hashing) algorithm and a public key signing algorithm. So to create a digital signature you'd compose a hash and a signer:
class digest_generator
{
public:
    digest_generator( hasher &h, signer &s ) : h_( h ), s_( s )
    {
    }
    std::string create_digest_for_stream( std::istream &str )
    {
        return s_.signature_of_string( h_.hash_of_stream( str ) );
    }
private:
    hasher &h_;
    signer &s_;
};
where hasher and signer are interfaces implemented by things like MD5_hasher, SHA1_hasher and RSA_signer.
The question is why you need a plugin for this sort of thing. While it may sound cool to have an externally implementable interface to do security stuff you open yourself up for attack by giving attackers a way of replacing or modifying parts of your code.
Anyway, think very carefully before letting other people modify the way your code works in a security context. It can bite you very hard and cause a lot of damage to your reputation.
Cheers,
Ash
|
|
|
|
|
First of all, why are you bothering to cook your own smart pointer class for this lot? Use std::unique_ptr or std::shared_ptr. Then when you've done that you can use the smart pointer to return objects from factory functions, so you don't have to worry about pointers-to-pointers and all the attendant exception safety issues.
For a plugin I can't see any mention of how you'd dynamically load the plugin - without that all you've got is a baroque implementation of the factory method design pattern. When you want to create an object based on a plugin you need a way to find the executable with the plugin (which will have to be loaded from a configuration file), load it and then create an instance of the class you want created. One way of doing that is to map class name against the name of a shared library which has a known entry point:
std::shared_ptr<plugin_interface> create_object()
{
    return std::shared_ptr<plugin_interface>( new plugin_implementation );
}
that gets called after the plugin is loaded.
One thing to consider is that you'll have to use the same compiler AND build settings to create the plugins and host program. Otherwise strangeness with different runtime library implementations will sink you. Provided you keep the compiler the same you can use whatever types you like in the interface.
Cheers,
Ash
|
|
|
|
|
I took your advice; hopefully I executed it correctly. Here is the updated snippet. I still use the CPluginObject template with the std::shared_ptr implementation, because it looks better to type CPluginObject<num> than CreatePluginClass(num, classPtr), or would this be bad design?
typedef std::shared_ptr<IPlugin> IPluginPtr;

void CreatePluginClass(int iid, IPluginPtr &pObj)
{
    if (iid == PLUGIN_CLASS_1)
        pObj = IPluginPtr(new TestPlugin1());
    if (iid == PLUGIN_CLASS_2)
        pObj = IPluginPtr(new TestPlugin2());
}

template<int N>
class CPluginObject {
public:
    CPluginObject() {
        CreatePluginClass(N, pPluginObj);
    }
    IPluginPtr GrabObject() {
        return std::dynamic_pointer_cast<IPlugin>(pPluginObj);
    }
protected:
    IPluginPtr pPluginObj;
};

int main()
{
    CPluginObject<PLUGIN_CLASS_1> plugin;
    IPluginPtr pObj = plugin.GrabObject();
    pObj->Function1();
    pObj->Function2();
    return 0;
}
Eventually I want to use std::map so I don't always have to create pObj for calling other classes.
IPluginPtr GrabObject(std::string strImp)
{
    if (m_ClassManager.count(strImp))
        return std::dynamic_pointer_cast<IPlugin>(m_ClassManager[strImp]);
    return IPluginPtr();
}
and to call the class:
GrabObject("TestClass1")->Function1();
GrabObject("TestClass1")->Function2();
well, something along those lines.
modified on Sunday, August 1, 2010 2:58 PM
|
|
|
|
|
Why don't you want to return a shared_ptr by value? What do you think you gain by default constructing an object and then overwriting its value? I'm not sure I see the logic to it, as it makes the client code more complicated and it's less efficient.
Have a google for NRVO and RVO - they're two optimisations that are allowed to change the observable behaviour of a program by eliminating copy constructor calls around the creation of temporary objects. If you look into them you'll see that they can eliminate a copy constructor call and a destructor call when you write your code in the form:
A do_something()
{
    return A();
}
compared to:
void do_something( A &a )
{
    a = A();
}
I'm still missing the reason why what you're doing needs to be a plugin - from what I can see you're just trying to implement a factory that's bound to one of a fixed number of possibilities at compile time.
Cheers,
Ash
|
|
|
|
|
Did you read this:
virtual void Function2() {
    printf("TestPlugin2::Function1\n");
}
Press F1 for help or google it.
Greetings from Germany
|
|
|
|
|
Yeah, I know; it was a copy and paste from the first function. It was just a small test.
|
|
|
|
|
Hi,
How can I set the style of a dialog as Child?
I tried but it is not working; the code is:
CDlgTest *dlg = new CDlgTest();
dlg->Create(IDD_DIALOG_TEST,&m_ListCtrl);
dlg->ModifyStyle(0,dlg->GetStyle()|WS_CHILD);
dlg->ShowWindow(1);
|
|
|
|
|
What is m_ListCtrl? Are you sure that it is the parent window for your child dialog?
dlg->ModifyStyle(WS_POPUP, WS_CHILD); would work well. If you added your child dialog from the Add Resource wizard, then from the property window change the Style property to 'Child' from 'Popup'. Then there is no need to call ModifyStyle().
|
|
|
|
|
You should not do this; a dialog is a popup window, as described in Microsoft's documentation.
It's time for a new signature.
|
|
|
|
|
Should not do?? Can't we set a dialog (with WS_CHILD) as the child of another dialog? What else are you recommending?
|
|
|
|
|
Cool_Dev wrote: What else are you recommending?
I'm not recommending anything, merely pointing out Microsoft's notes about dialogs and whether they are child windows or popups. There is a difference in the way they operate.
|
|
|
|
|
OK, that was the right context in which to point it out.
|
|
|
|
|
My apologies, I misread the original question and thought it was a modal rather than modeless dialog.
|
|
|
|
|
Au contraire. A dialog makes a perfectly good child window, 'embedded' inside another window. It acts as a self-contained host for dialog controls.
|
|
|
|
|
You are right of course. Senility or brain fade hit me yesterday.
|
|
|
|
|
'S Ok. I should have read the entire thread before replying, as you recovered well.
|
|
|
|
|
Why do you want to modify the window style at runtime? It is easier to modify the dialog template in the resource editor and set the dialog as child there.
|
|
|
|
|
I have tried it out with a time interval of 1 minute in Vista:
NOTIFYICONDATA m_tnd;
m_tnd.uFlags = NIF_INFO;
m_tnd.dwInfoFlags = dwIcon;
m_tnd.uTimeout = uTimeout * 1000;
BOOL bSuccess = Shell_NotifyIcon (NIM_MODIFY, &m_tnd);
But in Vista it always takes the OS settings. In the OS it is set to 5 seconds, and hence the balloon disappears after 5 seconds.
Can my application control this? How?
|
|
|
|
|
Hello all,
This is a macro from stdarg.h. I am trying to understand how it works, but I have no idea.

#define STACKITEM int

#define VA_SIZE(TYPE) \
    ((sizeof(TYPE) + sizeof(STACKITEM) - 1) \
     & ~(sizeof(STACKITEM) - 1))

Its definition says it gives the size of an object on the stack. I do not understand how this works.
Can anyone describe it for me?
Thanks
|
|
|
|
|
This macro rounds up the size of a variable type passed into a variable-length parameter list.
The size is rounded up to the size of a stack item, which will be the size of an int: 4 bytes on a 32-bit system. So even if you pass a char as one of the parameters, it will take up 4 bytes on the stack.
The extra code with (sizeof(STACKITEM) - 1) is meant to ensure that if you pass in an item with size 0, you still get 0.
This can happen in C when you pass in an empty struct or union.
C++ returns 1 for an empty class, struct, or union.
|
|
|
|
|
I assume that the expression

(sizeof(TYPE) + sizeof(STACKITEM) - 1)

gives result 0 when there is no more data on the stack (I may be wrong). But what about

& ~(sizeof(STACKITEM) - 1)

What does that do altogether?
Thanks
|
|
|
|
|
~ is a bitwise NOT operation on a value.
So if you have x = binary 00110101, ~(00110101) => 11001010
if you use a bitwise & like this (x & (~x)) the result is 0
x: 00110101
~x: 11001010
------------
& 00000000
That would represent the case where the sizeof() returned 0.
To see how the macro will work with a different type:
(32-bit system)
sizeof(int) := 4
sizeof(short) := 2
VA_SIZE(short) =>
Start: ((2 + 4 - 1) & ~(4 - 1)) =>
Simplify: (5) & ~(3) =>
Convert To Binary: (101) & ~(011) =>
BitWise NOT: (101) & (100) =>
BitWise AND: (100) =>
Convert to Decimal: 4 bytes used on stack
The whole point of the macro is to round the argument up to the size of stack entries, while returning 0 if the size of the parameter is 0, rather than automatically adding an empty stack item.
|
|
|
|