|
For those who have nVidia cards here's what's needed to ensure you have the right drivers.
- Make sure you have the CUDA 11.7 drivers and CUDA Toolkit 11.7 installed.
- Download and install cuDNN and ZLib:
1. Download cuDNN from https://developer.nvidia.com/rdp/cudnn-download. You need to sign up, sign in, verify your email and answer a survey, but you'll then get a file similar to cudnn-windows-x86_64-8.5.0.96_cuda11-archive.zip
- This file is v8.5, so create a folder "C:\Program Files\NVIDIA\CUDNN\v8.5" and extract the zip into that folder. There will be bin, lib and include folders, plus a LICENSE file.
- Add this path to the PATH environment variable:
setx /M PATH "%PATH%;C:\Program Files\NVIDIA\CUDNN\v8.5\bin"
- Download ZLib from WinImage (http://www.winimage.com/zLibDll/zlib123dllx64.zip) and extract it into a folder. Since it's being used by cuDNN it might be easier to just extract it into the cuDNN folder: "C:\Program Files\NVIDIA\CUDNN\v8.5\zlib"
- Add this path to the PATH environment variable:
setx /M PATH "%PATH%;C:\Program Files\NVIDIA\CUDNN\v8.5\zlib\dll_x64"
Notes:
- We have written a script that will be included in the next release that will do the heavy lifting for you (but you'll need to download the files - we can't do that since you need to login)
- When setting the PATH variable, setx will truncate to 1024 chars. Either edit the env var manually, or use this command:
powershell -command "[Environment]::SetEnvironmentVariable('PATH', '%PATH%;C:\Program Files\NVIDIA\CUDNN\v8.5\bin', 'Machine')"
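To illustrate the truncation note above, here's a quick sketch (plain Python, purely for illustration — not part of any installer) of what `setx /M` effectively does with the machine PATH:

```python
# Sketch: why "setx" can silently corrupt PATH.
# setx reads the current value, appends the new entry, then writes
# the result back, truncating anything beyond 1024 characters.

SETX_LIMIT = 1024  # characters setx /M will store

def append_to_path(current: str, new_dir: str, limit: int = SETX_LIMIT):
    """Return (value setx would store, True if entries were lost)."""
    combined = current + ";" + new_dir if current else new_dir
    return combined[:limit], len(combined) > limit

# A PATH already near the limit loses its tail when appended to
long_path = ";".join(f"C:\\tools\\app{i}" for i in range(70))
stored, truncated = append_to_path(
    long_path, r"C:\Program Files\NVIDIA\CUDNN\v8.5\bin"
)
```

If `truncated` comes back true, edit the variable by hand (or via the PowerShell call above) instead of using setx.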
Yes, this is a ridiculous amount of work. We'll be doing our best to do as much of this for you as we can.
cheers
Chris Maunder
|
|
|
|
|
I've followed the installation procedure posted yesterday for CUDA to the letter, but am unable to get any of the server modules running on the GPU. I also tried a re-installation of 1.5.6 after installing the CUDA drivers but it didn't help.
I've verified with NVIDIA's documentation that my card is supported, and somewhere along the CUDA installation one of the NVIDIA installers claimed to verify my system was CUDA capable.
Is there a config file somewhere I'm supposed to edit to tell the AI server to use the GPU?
Thanks,
Louis
|
|
|
|
|
You shouldn't need to touch any config files.
Have you verified the drivers are installed correctly? If I go to my nVidia control panel I see:

So driver 516.01 and my CUDA cores recognised. Are you seeing similar?
cheers
Chris Maunder
|
|
|
|
|
Yes, I see CUDA cores. See below.
I did find an issue with the path. For some reason only the second path addition for zlib was showing up from the command prompt. After going into the environment variables in system properties and fiddling with the order of the two path additions (which were both listed there just fine) they both now show up from the command prompt. I'd hoped this would solve it but no luck.

|
|
|
|
|
Looks like you're running old drivers. Can you try updating to the latest? According to nVidia you should actually be OK, but I don't think it will hurt.
Do you also have the nVidia toolkit installed?
cheers
Chris Maunder
|
|
|
|
|
I'm on the latest drivers offered for my card, downloaded direct from NVIDIA and installed today. It's an old card (GTX760). I've also verified in the NVIDIA documentation that I should be fine on this driver release.
Yes, the toolkit has been installed as well.
|
|
|
|
|
I have an awful feeling that your card isn't supported by the version of Torch we use (1.10). Your card has compute capability 3.0, which may no longer be supported.
I've dug around a bit but all I'm seeing is anecdotal evidence of this so far. Maybe someone else knows more.
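For the curious, the check boils down to a tuple comparison. A sketch — the table entries and the 3.7 minimum for Torch 1.10's prebuilt CUDA wheels are assumptions drawn from this thread and anecdotal reports, so verify your card against NVIDIA's own list:

```python
# Sketch: checking a card's compute capability against a library's
# assumed minimum. Values below are an illustrative subset, not an
# authoritative table.

COMPUTE_CAPABILITY = {
    "GTX 760":  (3, 0),   # Kepler (the card in this thread)
    "GTX 1060": (6, 1),   # Pascal
    "RTX 3080": (8, 6),   # Ampere
}

TORCH_110_MIN = (3, 7)    # assumed minimum for prebuilt 1.10 wheels

def gpu_supported(card: str, minimum=TORCH_110_MIN) -> bool:
    """True if the card is known and meets the minimum capability."""
    cap = COMPUTE_CAPABILITY.get(card)
    return cap is not None and cap >= minimum
```

On a machine with Torch installed, `torch.cuda.get_device_capability()` reports the real value for the active GPU.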
cheers
Chris Maunder
|
|
|
|
|
Chris you are correct, below is a good link to lookup CUDA version compatibility.
CUDA - Wikipedia
|
|
|
|
|
Mike, you are a legend. Thanks for that. I was looking all over and yet managed to miss Wikipedia
This will help us tune our installer.
cheers
Chris Maunder
|
|
|
|
|
Bummer. My BlueIris server is pushing ten years old with an ancient video card for a reason; I don't need anything more than that to perform the function and these were parts I had on hand already. My fossil GTX760 does all I need for video conversion for BlueIris. I'm betting there will be many using older hardware for the same reason.
Fortunately in my case I'm not missing out on much by not being able to use my GPU as I have plenty of free CPU and my BlueIris installation isn't particularly busy.
|
|
|
|
|
Does your CPU have an embedded Intel GPU? If so, that's on the roadmap for support for our Python modules. Our .NET objectdetector already supports it
cheers
Chris Maunder
|
|
|
|
|
It does, though I presently have it disabled. Good to know that's coming.
|
|
|
|
|
I didn't realise you could even disable that thing. I'm curious as to why one would disable it? To stop it interfering with (or being chosen preferentially to) another card?
cheers
Chris Maunder
|
|
|
|
|
Chris,
Are you saying that the Object Detection Net module supports Intel GPUs? I have CodeProject.AI Server installed on a laptop that has an integrated Intel UHD 620 GPU. On the dashboard it shows only the CPU is being used, and when running the benchmark the CPU load goes to 100% with no change to the GPU load.
Thanks
Mike
|
|
|
|
|
Your driver is old; below are the compatible CUDA drivers:

|
|
|
|
|
I'm on 473.81 and just need something newer than 452.39. I should be OK here.
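If you want to script that check, driver strings compare cleanly once split into numbers rather than compared as text (a generic sketch; the version numbers are the ones from this thread):

```python
# Sketch: comparing NVIDIA driver version strings numerically.
# Comparing them as plain strings would be wrong in general,
# e.g. "9.1" > "10.2" lexically.

def version_tuple(v: str):
    """'473.81' -> (473, 81), so tuples compare numerically."""
    return tuple(int(part) for part in v.split("."))

def meets_minimum(installed: str, required: str) -> bool:
    return version_tuple(installed) >= version_tuple(required)

# 473.81 >= 452.39, so this card's driver is new enough
```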
|
|
|
|
|
I've been playing around for the past couple of months with project.ai (i'm a lurker) and the experience has been pretty great thus far. Thanks for the hard work in getting that GPU support in as well, as a now (soon to be) former user of DS this is definitely a breath of fresh air. (Didn't see anything, but if there's a way to make a donation to the devs let me know)
First question. For the custom models: while I have not been able to get them to work through Blue Iris even after reading some tips and tricks listed in another thread, are we able to just drag and drop new .pt files into the folder so they are auto-recognized? Or does one need to annotate the config files to be able to use them? I have custom .pt files that are different from the included files. For example, one provided model is "ipcam-animals", and I have my own called "animals".
Second question. Being Linux and Docker are supported, is it known/tested if this project will work on a raspberry pi (with possibly including an Intel Neural Compute Stick 2)? I have DS loaded in that config that assists in object detection for a robot project, however the performance is less than desirable. My other tried option was passing images from robo cam over to a networked computer running DS as well, but being a wireless project I'd prefer not to go that route either.
|
|
|
|
|
To answer the first question: all you need to do is drop the .pt file into the folder shown below. If you want to share the images you used to train your animals model, I will add them the next time I train the ipcam-animals model.
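For anyone scripting this, a sketch of how such drop-in discovery could work — the glob pattern is the point here; the actual folder is the one in your CodeProject.AI install, not a path of mine:

```python
# Sketch: auto-discovering custom .pt model files dropped into a
# models folder. Pass the real models directory from your install;
# nothing here is specific to CodeProject.AI's internals.
from pathlib import Path

def discover_models(models_dir: str):
    """Map model name (file stem) -> path for every .pt file found."""
    return {p.stem: p for p in sorted(Path(models_dir).glob("*.pt"))}
```

Dropping `animals.pt` next to `ipcam-animals.pt` would make both names visible with no config edits, which matches the answer above.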


|
|
|
|
|
#2: Won't run on a Pi here.
>64
Some days the dragon wins. Suck it up.
|
|
|
|
|
We have discussed the possibility of a future release for constrained environments (like RPis), but it's not on the immediate roadmap. CodeProject.AI is designed to work as a network service, so all of the devices on your network can make use of it, including networked RaspberryPis.
|
|
|
|
|
For what it is worth.
Methinks it is our environment (the entry/doorclone camera is connected to BI via NVR) and GPU is not available here. Timeout is far too long. Will do testing on better environment.
3 8/11/2022 8:32:18.103 AM Cam6 MOTION_AG
3 8/11/2022 8:32:57.906 AM doorclone MOTION_AG
3 8/11/2022 8:32:57.916 AM entry MOTION_AG
1 8/11/2022 8:33:15.243 AM doorclone AI: timeout
0 8/11/2022 8:33:15.243 AM doorclone AI: Alert cancelled [AI: timeout] 16229ms
3 8/11/2022 8:33:22.486 AM downofc MOTION_AG
3 8/11/2022 8:33:32.748 AM downofc MOTION_AG
0 8/11/2022 8:33:33.975 AM App AI has been restarted
Sometimes OK but times are running in the 500ms range when it does work.
Cam6 and downofc are not using AI.
doorclone is clone of entry (entry is not using AI).
Many alert clips are missing a second or so from start of the recording. Again, I am blaming the environment although we do not see the same without AI, those recordings are good.
Perhaps it will be useful at night when activity is low and headlights are a problem.
Again, I thank you for all the effort.
>64
Some days the dragon wins. Suck it up.
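If it helps to quantify how often this happens, log lines like the excerpt above can be mined programmatically. A sketch — the column layout (level, date, time, AM/PM, camera, message) is inferred from this post, not a documented Blue Iris format:

```python
# Sketch: pulling AI timeout durations out of Blue Iris log lines.
import re

# level, date, time, AM/PM, then camera and the rest of the message
LINE = re.compile(r"^\s*\d+\s+\S+\s+\S+\s+(?:AM|PM)\s+(\S+)\s+(.*)$")
MS = re.compile(r"\[AI: timeout\]\s+(\d+)ms")

def ai_timeouts(lines):
    """Yield (camera, milliseconds) for each cancelled-alert line."""
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue
        camera, message = m.groups()
        ms = MS.search(message)
        if ms:
            yield camera, int(ms.group(1))
```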
|
|
|
|
|
I've got the latest GPU version installed and for some reason, every time Blue Iris sends an alert over, it crashes CodeProject.AI...
Running 1.5.6-Beta.
When I access localhost:5000 it shows "Vision Object Detection" is not enabled and shows up as "CPU". The rest of the items are enabled and show "GPU".
I've tried uninstalling the previous versions of CUDA, cuDNN etc., and re-downloaded and installed the latest versions.
Note: deepstack GPU version was running fine.
|
|
|
|
|
Thanks for the report.
Can you send us a little more info on what you're seeing?
Vision Object Detection is not enabled because we enable the CodeProject Object Detector instead. Vision is python based and comes from the YOLOv5 repo, whereas the CodeProject version is .NET and runs slightly faster.
I'm adding notes on the specifics of installing CUDA, but just quickly: the biggest requirement is that you have CUDA 11.x installed (we use 11.7) and cuDNN.
There are log files in C:\Program Files\CodeProject\AI\logs. Is there anything in there that looks interesting enough to email to us? (chris@codeproject.com is easiest)
We've seen an instance of the server crashing on Linux and are looking into that now, but that doesn't seem to be something that would affect your Windows install.
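If it helps to triage before emailing, here's a small sketch for skimming that logs folder — the keywords and the `*.txt` pattern are assumptions, so adjust to whatever file names your install actually produces:

```python
# Sketch: flagging log lines worth reporting. Point it at the logs
# folder mentioned above; keyword list is a guess, tune as needed.
from pathlib import Path

KEYWORDS = ("error", "exception", "crash", "fail")

def interesting_lines(log_dir: str, keywords=KEYWORDS):
    """Yield (filename, line) for log lines containing a keyword."""
    for path in sorted(Path(log_dir).glob("*.txt")):
        for line in path.read_text(errors="replace").splitlines():
            if any(k in line.lower() for k in keywords):
                yield path.name, line
```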
cheers
Chris Maunder
|
|
|
|
|
'I'm adding notes on specifics of installing CUDA'
Please share a Step-by-Step install when possible ..
Thanks,
Cj
|
|
|
|
|