|
I was getting that same error. I wish I could remember exactly what I did to fix it, or if it just stopped happening when I shifted over to the docker. Sorry, not much help- if I think of it I'll return.
One thing that does stick out in my head, the Medium model didn't seem to be stable for me. It would work well for a bit, and then it would start kicking out results at around 1000+ ms. I think it would sh*t the bed eventually. Anecdotal experience, I know. It seems like it's working fine for others.
Frigate is basically an NVR that works extremely well for object detection and events, but it doesn't have, and may never have, a classic NVR interface for scrubbing through footage. It's a fantastic project. I go back and forth mainly because I can't stand having my NVR on Windows. It's hard to compete with all the bells and whistles Blue Iris offers, though.
I held on to Deepstack for quite a while as well. It's nice to have all these options.
|
|
|
|
|
I've managed to overcome the stability issues, which were definitely a problem. I'm sure the explicit restarts that were added have helped, hacky as they are. But even with those issues resolved, the Coral TPU and the models themselves are the cause of poor detection, not CodeProject. So the fact that it's "new" isn't very relevant, since it's interfacing with something relatively mature. I also don't see how using Docker would improve the detection rate, given that the AI processing is done on the TPU with the models, independent of whatever OS is used.
That said, CodeProject could use better models or look into running YOLO on the TPU (see my other post). But we haven't got any kind of roadmap for this, so I wouldn't use or invest in a Coral TPU if you actually need decent detection.
|
|
|
|
|
Hi guys,
Trying to get CPAI running in a container on Proxmox - whenever I test detection, all the images just sit in the queue.
Tried Debian and Ubuntu, host and bridge network connections.
Any ideas? Thanks
08:40:09:Object Detection (YOLOv5 .NET): Object Detection (YOLOv5 .NET) module started.
08:40:11:Server: This is the latest version
08:40:11:Current Version is 2.1.11-Beta
08:40:40:Client request 'list-custom' in queue 'objectdetection_queue' (...bf51d5)
08:40:40:Client request 'list-custom' in queue 'objectdetection_queue' (...a3ea6f)
08:40:55:Client request 'custom' in queue 'objectdetection_queue' (...88ef32)
08:41:02:Client request 'detect' in queue 'objectdetection_queue' (...59f4f8)
08:40:08:Module 'Object Detection (YOLOv5 .NET)' (ID: ObjectDetectionNet)
08:40:08:Module Path: /app/preinstalled-modules/ObjectDetectionNet
08:40:08:AutoStart: True
08:40:08:Queue: objectdetection_queue
08:40:08:Platforms: windows,linux,linux-arm64,macos,macos-arm64
08:40:08:GPU: Support enabled
08:40:08:Parallelism: 0
08:40:08:Accelerator:
08:40:08:Half Precis.: enable
08:40:08:Runtime: dotnet
08:40:08:Runtime Loc: Shared
08:40:08:FilePath: ObjectDetectionNet.dll
08:40:08:Pre installed: True
08:40:08:Start pause: 1 sec
08:40:08:LogVerbosity:
08:40:08:Valid: True
08:40:08:Environment Variables
08:40:08:CUSTOM_MODELS_DIR = %CURRENT_MODULE_PATH%\custom-models
08:40:08:MODELS_DIR = %CURRENT_MODULE_PATH%\assets
08:40:08:MODEL_SIZE = MEDIUM
Server version: 2.1.11-Beta
Operating System: Linux (Linux 6.2.16-10-pve #1 SMP PREEMPT_DYNAMIC PMX 6.2.16-10 (2023-08-18T11:42Z))
CPUs: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz (Intel)
1 CPU x 4 cores. 8 logical processors (x64)
System RAM: 31 GiB
Target: Linux
BuildConfig: Release
Execution Env: Docker
Runtime Env: Production
.NET framework: .NET 7.0.10
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Video adapter info:
Global Environment variables:
CPAI_APPROOTPATH = /app
CPAI_PORT = 32168
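For what it's worth, one way to narrow this down is to take the NVR out of the loop and hit the detection endpoint directly with curl (default port 32168 assumed; test.jpg is any local image). If this also just queues, the detection module itself isn't picking up work:

```shell
# POST a single image to the CPAI object detection endpoint (default port assumed).
# Prints the JSON response, or a note if the server isn't reachable.
curl -s -X POST http://localhost:32168/v1/vision/detection \
     -F "image=@test.jpg" || echo "CPAI server not reachable"
```

A healthy response includes a "predictions" array; a request that merely queues forever suggests no module is servicing objectdetection_queue.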
|
|
|
|
|
I saw that CUDA is supported, but CUDA means NVIDIA GPUs only.
In the roadmap, there is a plan for "GPU support across a range of graphics cards and accelerators".
Do you plan to use OpenCL to support a range of accelerators?
|
|
|
|
|
The Object Detection (YOLOv5 .NET) module uses DirectML. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.
|
|
|
|
|
Thanks for the information
|
|
|
|
|
I'm looking for volunteers willing to test and run a new and very basic installer for Ubuntu (x64 or arm64) or macOS (Intel and Apple Silicon).
It will involve:
- Ensuring you don't have CodeProject.AI Server already running
- Downloading the installer
- Double-clicking (macOS), or sudo dpkg -i ... for Ubuntu users, then launching the service (systemd not quite working yet)
- Launching http://localhost:32168
- Watching, waiting, and then testing
There are known issues, such as the aforementioned systemd issue, plus a couple of issues with modules not installing properly that I'm trying to nail down this afternoon; if I can't, I'm hoping the collective mind can help out here.
Anyone who has the time, please let us know.
cheers
Chris Maunder
|
|
|
|
|
I have 2304 in a VM, could test on that.
|
|
|
|
|
I have an M1 mac. I'd be happy to do it.
|
|
|
|
|
Can also fire up an Ubuntu VM, just let me know.
|
|
|
|
|
I have an M1 Mac available, and could spin up an Ubuntu VM as well (via either Proxmox or Debian).
What kind of ARM64 devices are you looking for? I have an Odroid-N2+ running Armbian 23.05.1 Bookworm (kernel ver: 6.1.30-meson64)
Happy to help when I can.
|
|
|
|
|
Impacting or no?
2023-09-01 23:54:55: FaceProcessing: ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
2023-09-01 23:54:55: FaceProcessing: botocore 1.29.117 requires urllib3<1.27,>=1.25.4, but you have urllib3 2.0.4 which is incompatible.
This was previously reported as not impacting, so maybe the one above isn't either?
2023-09-02 01:27:58: TrainingYoloV5: ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
2023-09-02 01:27:58: TrainingYoloV5: botocore 1.31.40 requires urllib3<1.27,>=1.25.4, but you have urllib3 2.0.4 which is incompatible.
2023-09-02 01:27:58: TrainingYoloV5: google-auth 2.22.0 requires urllib3<2.0, but you have urllib3 2.0.4 which is incompatible.
2023-09-02 01:27:58: TrainingYoloV5: Installing Packages into Virtual Environment... Success
2023-09-02 01:27:58: TrainingYoloV5: Module setup complete
2023-09-02 01:27:58: Module TrainingYoloV5 installed successfully.
Whenever a new module is installed or upgraded, the installer exits with code 0, which means success, yet the UI reports it as an unknown error code.
2023-09-02 01:27:58: Installer exited with code 0
2023-09-02 01:27:59: Module TrainingYoloV5 started successfully.
Error in Install SuperResolution. Unknown response from server
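For context, exit code 0 is the standard success convention on Linux, so the "Unknown response" / "unknown error code" text looks like a UI labelling quirk rather than an actual failure. A trivial shell illustration:

```shell
# Exit status 0 signals success; any non-zero status signals an error
true;  echo "true exited with $?"    # prints "true exited with 0"
false; echo "false exited with $?"   # prints "false exited with 1"
```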
|
|
|
|
|
|
Hi,
I've put a Coral TPU (PCIe) in my server, passed it through to the CodeProject.AI VM, and run the Docker container with "privileged: true" so it has access to all devices on the VM.
Coral is available in the Ubuntu VM:
02:01.0 SATA controller: VMware SATA AHCI controller
03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
0b:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU
I've installed the Coral TPU module in the AI server via the web interface.
However, on the status page it still says: Not enabled
(That's the point where I look how to insert an image here)
So, it shows:
Face processing: Started
Object Detection (YOLOv5 .NET): Not enabled
Object Detection (YOLOv5 6.2): Started
ObjectDetection (Coral): Not enabled
How can I switch to use the Coral now?
My docker-compose:
version: "3.3"
services:
  deepstack:
    image: codeproject/ai-server:latest
    restart: unless-stopped
    container_name: senseai
    privileged: true
    ports:
      - "80:5000"
    environment:
      - VISION-SCENE=True
      - VISION-FACE=True
      - VISION-DETECTION=True
      - CUDA_MODE=False
      - MODE=Medium
      - PROFILE=desktop_cpu
      - Modules:TextSummary:Activate=False
      # - MODELS_DIR=/usr/share/CodeProject/SenseAI/models
    volumes:
      - /opt/senseai/data:/usr/share/CodeProject/SenseAI:rw
|
|
|
|
|
Hello,
I've been running Coral TPU with docker for a few weeks now with no problems...
try adding these lines to your docker-compose:
devices:
  - /dev/bus/usb:/dev/bus/usb
|
|
|
|
|
I have a PCIe Coral (A+E key); do you also know the line for this type of device? If I use the correct line, should the Coral then be visible on the "System Info" page in CodeProject.AI?
I've added the following device in unraid:
/dev/apex_0
I also use this device for "Frigate" there it is working with this naming.
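For reference, the PCIe Coral is exposed through the gasket/apex driver as /dev/apex_0 on the host, so the analogous compose mapping would presumably be the following (untested sketch; the device index may differ if you have more than one TPU):

```yaml
devices:
  - /dev/apex_0:/dev/apex_0   # PCIe Coral via the gasket/apex driver; index assumed
```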
If I enable the Coral module in CodeProject.AI, then I get the following error:
15:11:06:objectdetection_coral_adapter.py: Traceback (most recent call last):
15:11:06:objectdetection_coral_adapter.py: File "/app/modules/ObjectDetectionCoral/objectdetection_coral_adapter.py", line 9, in
15:11:06:objectdetection_coral_adapter.py: from request_data import RequestData
15:11:06:objectdetection_coral_adapter.py: File "/app/modules/ObjectDetectionCoral/../../SDK/Python/request_data.py", line 8, in
15:11:06:objectdetection_coral_adapter.py: from PIL import Image
15:11:06:objectdetection_coral_adapter.py: ModuleNotFoundError: No module named 'PIL'
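That traceback means Pillow (imported as PIL) is missing from the Python environment the module runs in. A quick way to check for it (shown here against a system interpreter; to check the module itself, point at the venv's python binary from the traceback instead):

```shell
# Report whether the Pillow package is importable by this interpreter
python3 -c "import PIL" 2>/dev/null && echo "Pillow present" || echo "Pillow missing"
```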
|
|
|
|
|
I have the exact same error message using a Google Coral TPU (USB) and AgentDVR in a Docker container. As long as the Google Coral module is the only object detection module, AgentDVR logs that the AI Server is down.
Docker compose file looks like this:
CodeProjectAI:
  image: codeproject/ai-server:gpu
  container_name: "CodeProjectAI"
  devices:
    - /dev/bus/usb:/dev/bus/usb # Google Coral
  privileged: true
  ports:
    - "32168:32168"
  environment:
    - TZ=Europe/Berlin
  volumes:
    - XXX:/etc/codeproject/ai
    - XXX:/app/modules
  restart: unless-stopped
The install.log from GoogleCoral Module contains the following lines:
2023-09-05 10:19:42: Installing setuptools... Done
2023-09-05 10:19:42: Choosing packages from requirements.linux.txt
2023-09-05 10:19:42: /app/modules/ObjectDetectionCoral/bin/linux/python39/venv/bin/python3.9: No module named pip
2023-09-05 10:19:42: Installing Packages into Virtual Environment... Success
2023-09-05 10:19:42: Checking for CUDA...Not found
2023-09-05 10:19:43: Ensuring PIP is installed... Done
2023-09-05 10:19:43: Updating PIP... Done
2023-09-05 10:19:44: Installing setuptools... Done
2023-09-05 10:19:44: Choosing packages from requirements.txt
2023-09-05 10:19:44: /app/modules/ObjectDetectionCoral/bin/linux/python39/venv/bin/python3.9: No module named pip
2023-09-05 10:19:44: Installing Packages into Virtual Environment... Success
2023-09-05 10:19:46: Ensuring curl is installed (just in case)... Done
2023-09-05 10:19:46: deb https:
2023-09-05 10:19:47: Downloading signing keys... Done
2023-09-05 10:19:47: Installing signing keys... Done
2023-09-05 10:19:52: Installing libedgetpu1-std (the non-desk-melting version of libedgetpu1)... Done
2023-09-05 10:19:57: Downloading MobileNet models...Expanding... Done.
2023-09-05 10:19:58: Module setup complete
Installer exited with code 0
Any idea?
|
|
|
|
|
This looks normal. Try shutting down AgentDVR first, then start CPAI. I wasn't able to share my Coral between Frigate and CPAI.
|
|
|
|
|
|
Dear Matthew!
Thanks for the feedback. What about the errors mentioned above, though? They prevent the Coral module from starting.
|
|
|
|
|
There is an issue in the Docker image for CPAI 2.1.11 (digest 5eb27f0a3f8b) that prevents correct installation of the Coral object detection module.
The cause is that python3.9-venv is not present in the Docker image.
Attaching to the running container and installing it with
apt-get install python3.9-venv
before installing the Coral object detection module resolves the issue and allows the module to install correctly.
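In concrete terms, the workaround looks like this (container name 'codeproject-ai' is a placeholder; substitute whatever your compose file names it). The commands are guarded so the snippet is harmless on a box without Docker:

```shell
# Attach to the running CPAI container and install the missing venv package,
# then reinstall the Coral module from the web UI afterwards.
if command -v docker >/dev/null 2>&1; then
  docker exec codeproject-ai bash -c "apt-get update && apt-get install -y python3.9-venv" \
    || echo "container 'codeproject-ai' not running"
else
  echo "docker not found"
fi
```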
|
|
|
|
|
I don't recall having to do that. I did have to shut Frigate down before starting CPAI, though. I wasn't able to share my TPU between the two containers.
|
|
|
|
|
I don't think you're utilizing your TPU. It should say "GPU(TPU)" rather than "CPU." I just tried running both YOLOv5 6.2 (CPU) and Coral at the same time and had no issues.
Edit: I also tried with YOLOv5 .NET with no problems.
Edit 2: I do agree that you should disable both YOLOv5 modules if you want to utilize the Coral, though! Otherwise it'll just cycle through the different modules.
|
|
|
|
|
I did this deliberately. This happened to me once when I forgot to restart the service.
16:59:00:Object Detection (YOLOv5 6.2): Queue request for Object Detection (YOLOv5 6.2) command 'custom' (...6a60f3) took 13ms
16:59:41:Object Detection (YOLOv5 6.2): Detecting using ipcam-combined
16:59:41:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 700, in forward
x = torch.from_numpy(x).to(p.device).type_as(p) / 255 # uint8 to fp16/32
RuntimeError: CUDA error: unknown error
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYolo\detect.py", line 162, in do_detection
det = detector(img, size=640)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 700, in forward
x = torch.from_numpy(x).to(p.device).type_as(p) / 255 # uint8 to fp16/32
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 162, in __exit__
self.dt = self.time() - self.start # delta-time
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 167, in time
torch.cuda.synchronize()
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\cuda\__init__.py", line 493, in synchronize
return torch._C._cuda_synchronize()
RuntimeError: CUDA error: unknown error
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
16:59:41:Object Detection (YOLOv5 6.2): Queue request for Object Detection (YOLOv5 6.2) command 'custom' (...e45eeb) took 64ms
[the identical "RuntimeError: CUDA error: unknown error" traceback then repeats at 16:59:43 and 16:59:44 for every subsequent 'custom' request against the ipcam-combined and license-plate models, each taking a few ms]
|
|
|
|
|
So pre-driver-update it was fine, and post-update it was not. Is that correct?
Pre-update: did you run nvidia-smi and nvcc --version?
Post-update: can you please run nvidia-smi and nvcc --version?
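For anyone following along, the two commands are sketched here with guards so they can be pasted even on a box where the NVIDIA tools aren't installed:

```shell
# Report the driver state (nvidia-smi) and CUDA toolkit version (nvcc), if present
command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi || echo "nvidia-smi not found"
command -v nvcc >/dev/null 2>&1 && nvcc --version || echo "nvcc not found"
```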
Also, do you recall the pre and post driver versions?
cheers
Chris Maunder
|
|
|
|
|