|
I have configured the manifest file for the TestB application with requestedExecutionLevel = 'requireAdministrator', and when running this application it prompts for the admin password. When the user provides this password, I am able to perform admin activities.
But I don't want the end user to know the admin password. Please let me know: is there any way in Vista to provide admin credentials to the process without user intervention?
jhghjghj
|
|
|
|
|
SNI wrote: Please let me know: is there any way in Vista to provide admin credentials to the process without user intervention?
No, there isn't.
That is the whole point.
If the user is an admin, they will be prompted and will have to click OK.
If the user is not an admin, they will be prompted and have to enter the admin password.
This is a security feature. The only way to avoid the UAC prompts is to disable UAC. Disabling UAC is a setting that the user chooses, and you can't guarantee your users will have it disabled. You should write your program so it works fine with UAC turned on.
If you want your program to do admin tasks, you have to put up with the UAC prompts. This is just how Vista works.
Simon
|
|
|
|
|
Thanks. What about PowerShell scripting? I read that it helps avoid prompting the user for admin credentials. Do you have any idea?
jhghjghj
|
|
|
|
|
No. There is no way to prevent the UAC prompts from happening.
The user can disable them, but that is their choice. You cannot just have them disabled for your application.
Simon
|
|
|
|
|
I'm having a problem with the file properties Summary Tab being turned off on Zip files only. I've read some threads linking this to the recent SP3 update for XP, but I haven't found a way to re-enable the tab. The message I get on the Zip file Summary Tab (for all Zip files) is "Summary properties are unavailable for the selected source(s)."
I have a second PC running XP Professional as well, and the SP3 update didn't complete on it. The Summary Tab is working fine on that computer. Anyone have any suggestions?
|
|
|
|
|
|
Hi,
I am a web developer and I have been asked to program a stupid batch file, but I am totally ignorant about batch scripting.
I have thousands of XML files in a folder, I designed an XSL transformation, and I need to apply the transformation to all of these files.
1. I downloaded msXsl and it works great.
2. I made a little batch calling msXsl in this way:
msxsl.exe %1 %2 -o %3.xml
This works and saves my source file as a modified XML file (through my XSL transformation).
The point is that I need another batch file wrapping this one; basically it should invoke this one, passing the name of each file in the folder as a parameter.
Any help is really appreciated.
Dave
|
|
|
|
|
Use "Call" and the name of the batchfile to start it in the same environment, "focus" is returned to the calling batch as soon as the called batch exits.
Use "Start" and the name of the batchfile to start it in a new environment that runs independantly from the original batchfile.
|
|
|
|
|
I've got a file that is 66 GB. (it is a virtual box VDI file)
I want to move it from the drive I created it on, to another drive (with 70+ GB available).
Always, after copying between 55 and 62 GB (NOT near 2^N), it fails with a "Delayed Write Failed" error.
The target drive is set to write through the cache.
I can still see the drive through other means, so it is not dead.
I have run diagnostics, and the drive does not own up to any problems.
I have tried copying it with explorer.
I have written a program that copies, reading, writing and flushing a meg at a time.
Always the same result.
I zipped it and copied it to another machine, and tried to unzip it. (It's about 8 GB compressed.)
When unzipping, it writes around 60-62 GB into the target, and then starts getting delayed write failed errors.
So now we are on a totally different system, and it is generating the same type of error at roughly the same file size, on three different disks.
I have also tried unzipping it on a W2K3 machine, and that went fine.
Now I am trying to push it from the W2K3 machine to the XP Pro machine.
NTFS is supposed to handle files with more bytes than there are grains of sand (give or take 15 )
Is there some limit with XP on how big a single write session can be, or some other limit I might be hitting?
TIA
Richard
Silver member by constant and unflinching longevity.
|
|
|
|
|
|
I can't comment on the technicalities of why it might be failing, but I can suggest a workaround:
TeraCopy[^]
A gem of a utility IMHO
|
|
|
|
|
Delayed write errors simply mean write errors that occurred when the system tried to write back from cache to the disk. The system caches writes in order to consolidate them, to maximise writing performance, and to support fast reads back from recently-written locations.
The problem is with the disk hardware, not the OS.
"Multithreading is just one damn thing after, before, or simultaneous with another." - Andrei Alexandrescu
|
|
|
|
|
Normally I would agree with you - but I have tried it on multiple hard disks, and XP chokes on all of them.
I have run full diagnostics on the disks, and there are no problems with them. SMART thinks they have no issues, as well. This is something intrinsic to XP, not the drives.
Silver member by constant and unflinching longevity.
|
|
|
|
|
This is something specific to your machine/Windows installation. I've written some tests creating and copying 80GB files between drives without any issues.
|
|
|
|
|
I could see that, but they are plain vanilla XP Pro SP2 installs. One on a Dell, and one on a ThinkPad.
Both act the same. Win 2K3 does not.
Silver member by constant and unflinching longevity.
|
|
|
|
|
Do you have some anti-virus/anti-spyware software or some disk management tools, like Diskeeper, installed on these machines?
|
|
|
|
|
Dave,
Thank you.
Stupid me - of course, if there is an apparent OS issue, I should have checked at least the antivirus first. I must not be thinking straight to have missed checking that.
I have SmartDefrag running on both the XP machines, but not the 2K3 machine, and Symantec Endpoint on the ThinkPad. I'll turn them all off and try again.
Richard
Silver member by constant and unflinching longevity.
|
|
|
|
|
When I start an application (details below) from a mapped network drive (e.g. "Z:\software\myapplication.exe") and, while working with said application, the network connection is disrupted (e.g. because I pull my network cable), then after some random amount of time my application will "vanish" - no error messages, no log entries, nothing, simply gone. Why is that?!?
The application is a pretty normal MFC-based application made with VC++ 2003, using a couple of DLLs (the MFC redistributable stuff and the stuff you need to integrate a simplistic Subversion client into it), with everything being in the very same directory ("Z:\software\myapplication.exe", "Z:\software\mfc71.dll", "Z:\software\intl3_svn.dll", etc.). The network drive itself is hosted on a Samba server and I am using a fully updated Windows XP.
|
|
|
|
|
Are there any DLLs in the startup directory, or does it store any temp files there?
Silver member by constant and unflinching longevity.
|
|
|
|
|
In the network directory...
S:\Creator
...are the following files:
-- my application --
Creator.exe
-- DLLs needed by MFC --
mfc71.dll
msvcp71.dll
msvcr71.dll
-- DLLs needed by integrated Subversion client --
intl3_svn.dll
libapr.dll
libapriconv.dll
libaprutil.dll
libdb44.dll
libeay32.dll
ssleay32.dll
-- files being read by my application during startup, but not again later --
CreatorEditRights.dat
symbols.bmp
symbolsme.bmp
On my (local) desktop I do have a shortcut icon pointing to...
S:\Creator\Creator.exe
...whereas the working directory is...
S:\Creator
The project data loaded during startup of my application all resides in a directory on my local hard disk C:.
Please note: the "DLLs needed by MFC" are of course not necessary on my development machine, but I put them there since the users of my application don't necessarily have the MFC 7.1 redistributables installed. I wanted to keep it simple, i.e. no installation required, so I put the necessary DLLs right next to my application. This works perfectly on all the Windows XP machines here in the office.
At least as long as the network directory is accessible all the time...
|
|
|
|
|
T.T.H. wrote: This itself works perfectly on all the Windows XP machines here in the office.
At least as long the network directory is accessible all the time...
Exactly.
I suspect it dies when it is trying to load another dll from the local directory, either originally, or when it swaps in and out.
If this is in C++, I would put a try/catch around the event loop and show a message with whatever comes back. I suspect it is an uncaught exception when a DLL fails to load. Or possibly another part of the app?
I'm assuming you have the source; otherwise, you are going to have to create a simple install.
Either that, or be willing to take the hit when the network goes down. I suspect that there will be so many other screams that this one will just raise the noise level slightly.
RM
Silver member by constant and unflinching longevity.
|
|
|
|
|
I do have the source code (it's "my" application after all) and there already is a big try/catch block around the Run() function:
int CCreatorApp::Run()
{
    int returnValue = 0;
    try {
        returnValue = CWinApp::Run();
    }
    // ((( lot of different catches here, including... )))
    catch (...) {
        // handle / log the unknown exception
    }
    return returnValue;
}
Unfortunately it never triggers (in the case where the network is gone). In addition, if there were an unhandled exception, my operating system should tell me "hey, there has been an unhandled exception" (that default message box) - which does not appear either.
The problem being caused by trying to load a DLL sounds reasonable - but a) how can I find out whether this is actually the cause (does Windows have any log files for such events?), and b) how can I prevent it (e.g. by preloading all DLLs)?
Regarding local installations: the huuuge advantage of the "exe in network share" solution is the fact that I can update the application (which happens quite often, like once every week) and everybody is instantly working with the new version. Users are lazy and the last thing I need is them whining over bugs which would not have happened with the most current version of the application.
The whole thing ran fine for the last three years, with a random, at that point unexplainable, crash every half year or so. But since our network router developed a hiccup and the network occasionally goes away, it has really become an annoyance, with "loss of work" and all such nasty things.
P.S.: thanks for your help, it is really appreciated!
|
|
|
|
|
There are two sides to the dll problem - YOUR dlls, and 'system' dlls.
Your dlls are part of your app, they are compiled from code you write, and that you could change.
System DLLs are things like the MFC DLLs, which will not change unless you change your development environment.
Move the system DLLs to the local systems, so they are always available. Keep your DLLs and executable on the network but, like you said, force-load them.
It's been years since I've done VC++/non-managed, so I don't remember any system calls for loading DLLs.
Off the top of my head, put a 'known' method in each of the DLLs (like 'void LoadGraphicsDLL();' in graphics.dll). Spin off a thread that calls each of them after a minute or so, so it does not directly affect your load time. I know there are also ways to force a DLL to load by name. This link may help, and there are other CodeProject DLL loading articles as well.
Delayed DLL Loading[^]
I have not looked at this in detail, but it is C++, not sure if managed or not.
Another thing you could do is make a directory in temp or even 'Program Files', copy any DLLs there when you load, and then force-load the DLLs from there. In effect, installing on the fly.
I guess you could even take this to the extreme and have a small program - the one the users think they are launching - that makes sure the local copy is up to date and then starts the real (local) program (see the sketch below). You just have to figure out a way to keep them from realizing which one is the real app and running it directly.
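A rough batch sketch of that launcher idea - the share path S:\Creator comes from the post above, while the local folder C:\CreatorLocal is just a placeholder:
rem Refresh a local copy of the application, then run it from the local disk so
rem the EXE and DLLs are never paged in over the network.
rem xcopy /D copies only files newer than the local copy; /Y suppresses prompts.
if not exist "C:\CreatorLocal" mkdir "C:\CreatorLocal"
xcopy "S:\Creator\*.*" "C:\CreatorLocal\" /D /Y
rem Start the real application with the local folder as its working directory.
start "" /D "C:\CreatorLocal" "C:\CreatorLocal\Creator.exe"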
This leads to all kinds of testing scenarios to determine what would die, and at what stage it would.
All of this to say that no, now I'm not sure how to catch the exception, or if there even is an exception available to catch.
I remember that we used to have a similar problem with a local net deployed app (during testing), and just wrote it off as a cost of having network deployment. I don't recall that we did or did not get a log entry. We just told the test team that of course it would die if the exe came across the network, and they pulled the network cable out.
Silver member by constant and unflinching longevity.
|
|
|
|
|
Windows is a demand-paged operating system. It only reads pages from executables as they are referenced. If the OS has lost contact with the file system that the executable came from, when a new page that hasn't been used before (or was discarded because it hadn't been used recently) is referenced, an error may occur. If it does, Windows cannot continue executing the process and so kills it.
There are linker flags which you can use to tell the linker to mark the executable so that the OS reads the whole file into the swap file before beginning execution. Use /SWAPRUN:NET to mark for network execution and /SWAPRUN:CD to mark for removable disks such as CD-ROM. You can use editbin to mark the executable after linking.
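For reference, the concrete commands look roughly like this (run from a Visual Studio command prompt; Creator.exe is the executable name used earlier in this thread):
rem Either add the flag to the project's linker options:
rem   /SWAPRUN:NET
rem ...or mark an already-built executable afterwards with editbin:
editbin /SWAPRUN:NET Creator.exe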
"Multithreading is just one damn thing after, before, or simultaneous with another." - Andrei Alexandrescu
|
|
|
|
|
After quite some time I finally tried out your suggestion today and it seems to solve my problem.
Thank you so much!
|
|
|
|
|