|
|
My setup was CPAI running as a container on Unraid.
Windows on a dedicated SFF PC (Lenovo ThinkCentre, i5-3470T).
A GTX 970 on Unraid passed through to the container.
I have a few Coral devices (A+E, M.2 Wi-Fi, and USB) that I used with Frigate.
I tried the TPU with CPAI last year and it failed miserably.
I now have a setup using the TPU with CPAI and Blue Iris that works very well:
1- Windows 10 X-Lite on the i5-3470T, 8 GB RAM, SATA SSD
2- CPAI container on Unraid (this could also be CPAI on the Windows PC; my SFF has an A+E slot, so I may experiment)
3- Coral M.2 Wi-Fi TPU on the ASRock B550M board that hosts Unraid (used for my NAS)
4- AITool on the Windows PC
5- BI Alerts tab: "Confirm alert with AI" UNticked
6- BI Record tab: save JPEGs to a local folder on the Windows PC
7- AITool server pointing to the Unraid CPAI
8- AITool folder set to the BI path for the recorded JPEGs
CPAI_CORAL_MODEL_NAME = YOLOv8
Size: Medium
Inference speed: 8.6 ms
Round trips: 15-22 ms
Everything is sent to Telegram and to MQTT for Node-RED.
I removed my old GPU to reduce power consumption.
So far so good.
AITool was necessary here to finely tune each object (% image size, etc.).
Hope this helps someone who is struggling and wants to use a TPU.
12 cameras, 6 recording 24/7.
CPU usage is just 10-12%!
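For anyone replicating this on Unraid, the setup above translates roughly into the following docker run (a sketch only: the image name and port are the CPAI Docker defaults, the config mount path follows the CodeProject.AI Docker docs, and the device path assumes the M.2 Coral enumerates as /dev/apex_0, which requires the gasket/apex driver on the host):

```shell
# Sketch: CPAI container with an M.2 Coral TPU passed through.
# Assumes the gasket/apex driver is installed on the Unraid host
# so the TPU appears as /dev/apex_0.
docker run -d --name codeproject-ai \
  -p 32168:32168 \
  --device /dev/apex_0:/dev/apex_0 \
  -e CPAI_CORAL_MODEL_NAME=YOLOv8 \
  -v /mnt/user/appdata/codeproject-ai:/etc/codeproject/ai \
  codeproject/ai-server
```

On Unraid itself the same thing is done through the container template (extra device and variable fields) rather than a raw docker run.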
modified 23-Apr-24 12:49pm.
|
|
|
|
|
For what it’s worth, that sounds like it’s running YOLOv8 medium a bit too fast, so there may be a different model loaded up than expected. But if the results are good, no one’s gonna complain! Also, you should be able to plug in all of the TPUs and it’ll use them all simultaneously!
|
|
|
|
|
That's what I thought. It may well be using MobileNet SSD Medium even though the info says YOLOv8. Maybe still a bit buggy. Anyway, it works and works well now.
I do have the USB TPU plugged in, but it's hosted as a backup for my Frigate.
It's switched off, though.
I may well play around with multi-TPU. Thanks for the tip.
It is a bit unstable when you try to change settings; I had to reinstall the module a couple of times.
|
|
|
|
|
Yeah. The downside of the USB one is that it may make the whole system less stable. Hard to say without trying it out, though. YOLOv8 medium works better on two than one, though, so I’d suggest giving it a try.
|
|
|
|
|
Nope, it didn't play ball at all.
I added both devices to the container:
/dev/apex_0
/dev/usb
It failed to work, even after reinstalling everything.
/dev/usb works on its own, but it's twice as slow, yet it shows multi-TPU, which is odd.
I removed that and re-added /dev/apex_0, which is the fastest, but it would not work alongside the USB, unfortunately.
It might work if your board had slots for two PCIe/Wi-Fi cards, but it did not like having both the USB and M.2 added to the container paths.
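For reference, the failing combination corresponds to mapping both devices into one container, roughly like this (a sketch of the docker run equivalent of the Unraid device mappings; one thing worth checking is that USB Corals are normally passed through as the USB bus, /dev/bus/usb, rather than a /dev/usb path, which could be part of the problem here):

```shell
# Sketch: both Coral TPUs mapped into a single CPAI container.
# The M.2 card appears as /dev/apex_0 (apex/gasket driver);
# the USB Coral is usually exposed via the whole USB bus.
docker run -d --name codeproject-ai \
  -p 32168:32168 \
  --device /dev/apex_0:/dev/apex_0 \
  --device /dev/bus/usb:/dev/bus/usb \
  codeproject/ai-server
```

With /dev/bus/usb mapped, the container should be able to re-enumerate the USB Coral after it switches modes on first inference, which a single fixed device node cannot do.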
|
|
|
|
|
Huh. Weird. I see no reason why that wouldn’t work. I should just buy my own to test it. I currently have two M2 slots filled and three PCIe slots with dual TPU adapters.
|
|
|
|
|
Let me know how it goes if you test with multiple TPUs.
It may work if I try them directly on the dedicated Windows PC rather than passing them through to the container on Unraid.
If I get time I'll try them on the Windows PC (it has an A+E slot).
|
|
|
|
|
I have eight TPUs in my machine, just none of them USB. So I know it should work.
|
|
|
|
|
8??? omg. how does the multi-mode work then?
|
|
|
|
|
|
Just a thought: I wonder if the mesh would work like this, with one TPU on one instance of CPAI and the second TPU on another instance of CPAI.
|
|
|
|
|
The pipelining between model segments will be more efficient if they are on the same machine.
|
|
|
|
|
Ah, nice. I did not know Unraid had the option in its settings to run a second instance of a container!
|
|
|
|
|
I wasn't able to get a second instance running on a different port (32169, for example). It spun up, but it was somehow linked to 32168 each time, even after making sure all the host ports (TCP/UDP and web GUI) were set to 32169, with a unique name, etc. No success.
|
|
|
|
|
I set up a second instance on the Windows PC with the Coral USB.
I then added both servers in AITool so it uses the two of them. It seems to work well and share the load.
|
|
|
|
|
I still think you should have better performance with one instance of CPAI running two TPUs, but if that doesn’t work, then I’m glad you got this working!
|
|
|
|
|
Some interesting results from testing the tiny, small, medium, and large MobileNet SSD models with the same picture.
The small model found far more objects than all the other models, even though some were wrong!
So I'm going with the small model for now to see how it goes with the AITool filters. It may pick up the smaller objects that the medium and large models miss, and the filters can weed out the false detections.
|
|
|
|
|
My question:
Is it possible to have this configuration and get the system working correctly?
My GPU is always at 0% usage. I have tested many driver, CUDA, and CodeProject.AI versions, and the GPU still sits at 0%.
I don't know whether more information could be added to the docs or the knowledge base; the project is evolving well, but even updates never work correctly with my instance.
I'm on a Linux Docker system.
Thanks in advance for any answer.
What I have tried:
Testing many driver, CUDA, and CodeProject.AI versions (for about a year).
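One thing worth confirming while debugging: on Linux, the container only sees an NVIDIA GPU if the host has the NVIDIA Container Toolkit installed and the container is started with GPU access and a CUDA-enabled image. A sketch, assuming a CUDA image tag exists for your CUDA version (check Docker Hub for the exact tag):

```shell
# Prerequisite: `nvidia-smi` works on the host, and the
# NVIDIA Container Toolkit is installed.
docker run -d --name codeproject-ai \
  --gpus all \
  -p 32168:32168 \
  codeproject/ai-server:cuda12_2   # tag is an assumption; check Docker Hub

# Then verify the GPU is visible from inside the container:
docker exec codeproject-ai nvidia-smi
```

If `nvidia-smi` fails inside the container, the 0% usage is a host/toolkit issue rather than a CPAI module issue.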
|
|
|
|
|
Have you been able to run any version of CodeProject.AI Server? Also, could you please share your System Info tab from your CodeProject.AI Server dashboard, if you're able to load it, and share any install logs you have for modules where you were trying to use GPU?
Historically, a lot of users have had trouble getting the GPU to work for the GTX 1650. Half-precision should definitely be disabled. I did see one user report that 2.5.6 worked for them on Windows.
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
|
Thanks very much for the further info. If GPU is displayed on the CodeProject.AI Server dashboard next to the YOLOv8 module, it means the torch libraries are reporting GPU is present and will be used.
What tool/program are you using to monitor the GPU usage?
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
When I access the server via the IP address at port 32168, the server shows "Online"; however, when accessed through Cloudflare tunnel, the server displays "Searching" and then "Offline".
The CodeProject.AI server is running in my Docker on my NAS.
Any idea what could be the problem?
-- modified 20-Apr-24 3:18am.
|
|
|
|
|
Apologies, this is a Cloudflare Tunnel setup question, and I am unfamiliar with it.
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
Inside Blue Iris, I can view video and "Test with AI", where it puts a box around people, cars, etc. and labels them. For some reason I am unable to get this feedback in BI when using Object Detection (YOLOv8) 1.4.2.
I've tested a bunch of object detection versions and I find YOLOv8 the best, but I have no way to tell whether it's actually running correctly.
When I run Object Detection (YOLOv5 .NET) 1.10.1, I am able to review footage with "Test with AI" and the box comes up around whatever is detected.
Is there a way to fix YOLOv8?
This is from the logs:
13:25:54:Response rec'd from Object Detection (YOLOv8) command 'detect' (...e47814)
13:25:54:Object Detection (YOLOv8): Unable to create YOLO detector for model yolov8m
13:26:12:Object Detection (YOLOv8): C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv8\assets\yolov8m.pt does not exist
13:26:12:Response rec'd from Object Detection (YOLOv8) command 'detect' (...e16bc8)
13:26:12:Object Detection (YOLOv8): Unable to create YOLO detector for model yolov8m
13:26:12:Object Detection (YOLOv8): C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv8\assets\yolov8m.pt does not exist
modified 23-Apr-24 13:29pm.
|
|
|
|