|
Hey,
My network provider (Telekom, a quite popular provider in Germany) seems to have issues reaching the CodeProject hosting. Downloading the installer was very slow, at about 50 KB per second, with many failed attempts. I managed to get the installer using my mobile plan. However, the connection problem now stops me from installing modules (I cannot use my mobile connection on the server), which makes it impossible to use object detection.
I've attached an excerpt from the server logs showing the issue when trying to download the module.
The target folder for the download does not contain any files.
22:47:26:ObjectDetectionYOLOv5Net: Reading ObjectDetectionYOLOv5Net settingsUsed modulesettings.json to get value Object Detection (YOLOv5 .NET)
22:47:26:ObjectDetectionYOLOv5Net: .Used modulesettings.json to get value 1.9.3
22:47:27:ObjectDetectionYOLOv5Net: .Used modulesettings.json to get value dotnet
22:47:27:ObjectDetectionYOLOv5Net: .Used modulesettings.json to get value Shared
22:47:27:ObjectDetectionYOLOv5Net: .Used modulesettings.windows.json to get value bin\ObjectDetectionYOLOv5Net.exe
22:47:27:ObjectDetectionYOLOv5Net: .Used modulesettings.json to get value true
22:47:28:ObjectDetectionYOLOv5Net: .Used modulesettings.json to get value ["all"]
22:47:28:ObjectDetectionYOLOv5Net: .Done
22:47:28:ObjectDetectionYOLOv5Net: Installing module Object Detection (YOLOv5 .NET) 1.9.3
22:47:28:ObjectDetectionYOLOv5Net: Variable Dump
22:47:28:ObjectDetectionYOLOv5Net: moduleName = Object Detection (YOLOv5 .NET)
22:47:28:ObjectDetectionYOLOv5Net: moduleVersion = 1.9.3
22:47:28:ObjectDetectionYOLOv5Net: runtime = dotnet
22:47:28:ObjectDetectionYOLOv5Net: runtimeLocation = Shared
22:47:28:ObjectDetectionYOLOv5Net: installGPU = true
22:47:28:ObjectDetectionYOLOv5Net: pythonVersion =
22:47:28:ObjectDetectionYOLOv5Net: virtualEnvDirPath =
22:47:28:ObjectDetectionYOLOv5Net: venvPythonCmdPath =
22:47:28:ObjectDetectionYOLOv5Net: packagesDirPath =
22:47:28:ObjectDetectionYOLOv5Net: moduleStartFilePath = bin\ObjectDetectionYOLOv5Net.exe
22:47:28:ObjectDetectionYOLOv5Net: Downloading ObjectDetectionYOLOv5Net-DirectML-1.9.3.zip to C:\Program Files\CodeProject\AI\downloads\ObjectDetectionYOLOv5Net\bin
22:47:28:ObjectDetectionYOLOv5Net: Downloading ObjectDetectionYOLOv5Net-DirectML-1.9.3.zip...Checking 'C:\Program Files\CodeProject\AI\downloads\ObjectDetectionYOLOv5Net\ObjectDetectionYOLOv5Net-DirectML-1.9.3.zip'
22:47:54:Response timeout. Try increasing the timeout value
Is this a known issue? Are there any alternative download links I could use to obtain the module?
Best
|
|
|
|
|
|
Is there any chance you could do a tracert to www.codeproject.com and email me the results? (chris@codeproject.com)
cheers
Chris Maunder
|
|
|
|
|
Done
|
|
|
|
|
Medium on Coral with CPAI used to take ~114ms on the maximum performance driver. But after updating, it's taking ~14ms per detection, and, checking from Blue Iris, the detections are much worse now than before.
I tried Large, but at ~240ms it's too slow. What happened to Medium?
(btw, Coral is always "Lost Contact" now)
|
|
|
|
|
I had the same experience. I have an M.2 single TPU but after the update it was set to Multi-TPU. I disabled Multi-TPU Support and everything is running much better. More accurate detection and about the speed it was before.
|
|
|
|
|
I tried disabling multi-TPU but it's still the same. Seems like Small and Medium are exactly the same now for some reason
|
|
|
|
|
Are you seeing "Lost Contact" too? If so, can you try Ctrl+F5?
I'm trying to replicate the issue but it's stubbornly refusing to lose contact for me. What's your system info tab say?
cheers
Chris Maunder
|
|
|
|
|
Mine's also always "Lost Contact". Tried installing from scratch without any change. System info says:
Server version: 2.5.4
System: Docker
Operating System: Linux (Ubuntu 22.04)
CPUs: Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz (Intel)
1 CPU x 4 cores. 8 logical processors (x64)
System RAM: 15 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Docker
Runtime Env: Production
.NET framework: .NET 7.0.16
Default Python: 3.10
Go Version:
Video adapter info:
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
modified 17-Feb-24 22:12pm.
|
|
|
|
|
Wait, CPAI works on the M.2 Coral now? Mine is the dual version, can it work?
|
|
|
|
|
Can you please do a Ctrl+F5 on your browser just to ensure there are no browser cache issues, and then click the "info" button on the Coral status row, click the "copy" icon at the top right and paste that here?
cheers
Chris Maunder
|
|
|
|
|
Sure, refreshed without caches (still says "Lost Contact" though). Btw I've re-installed the Coral driver using the "max" driver inside the Docker container, as per your instructions from another thread. Here's the Coral info (the Copy icon, the little clipboard, doesn't actually work so I just copied the Info manually):
Module 'Object Detection (Coral)' 2.1.3 (ID: ObjectDetectionCoral)
Valid: True
Module Path: <root>/modules/ObjectDetectionCoral
AutoStart: True
Queue: objectdetection_queue
Runtime: python3.9
Runtime Loc: Local
FilePath: objectdetection_coral_adapter.py
Pre installed: False
Start pause: 1 sec
Parallelism: 1
LogVerbosity:
Platforms: all
GPU Libraries: installed if available
GPU Enabled: enabled
Accelerator:
Half Precis.: enable
Environment Variables
CPAI_CORAL_MODEL_NAME = MobileNet SSD
CPAI_CORAL_MULTI_TPU = False
MODELS_DIR = <root>/modules/ObjectDetectionCoral/assets
MODEL_SIZE = medium
Status Data: {
"inferenceDevice": "TPU",
"inferenceLibrary": "TF-Lite",
"canUseGPU": "false",
"successfulInferences": 42719,
"failedInferences": 0,
"numInferences": 42719,
"averageInferenceMs": 8.372527446803531,
"numItemsFound": 226603,
"histogram": {
"person": 27140,
"cat": 122,
"airplane": 492,
"refrigerator": 244,
"tv": 2206,
"clock": 710,
"chair": 3285,
"bird": 247,
"potted plant": 4865,
"bench": 6661,
"car": 120381,
"bicycle": 33748,
"traffic light": 2659,
"suitcase": 3270,
"dining table": 178,
"handbag": 1422,
"dog": 252,
"bus": 5,
"surfboard": 8,
"truck": 240,
"sink": 19,
"motorcycle": 58,
"toilet": 3368,
"bottle": 3657,
"bowl": 3949,
"skateboard": 1965,
"boat": 10,
"toothbrush": 859,
"cup": 268,
"umbrella": 12,
"cell phone": 225,
"laptop": 281,
"teddy bear": 2,
"remote": 172,
"book": 3253,
"vase": 9,
"wine glass": 2,
"couch": 4,
"backpack": 243,
"cake": 3,
"parking meter": 1,
"frisbee": 3,
"tennis racket": 2,
"spoon": 7,
"cow": 3,
"mouse": 52,
"sports ball": 2,
"fire hydrant": 12,
"knife": 1,
"train": 23,
"oven": 1,
"tie": 1,
"snowboard": 1
}
}
Started: 17 Feb 2024 12:00:15 PM Coordinated Universal Time
LastSeen: 18 Feb 2024 2:44:04 AM Coordinated Universal Time
Status: Started
Requests: 42721 (includes status calls)
Installation Log
2024-02-17 08:11:27: Setting verbosity to quiet
2024-02-17 08:11:27: Hi Docker! We will disable shared python installs for downloaded modules
2024-02-17 08:11:27: No schemas installed
2024-02-17 08:11:27: (No schemas means: we can't detect if you're in light or dark mode)
2024-02-17 08:11:27: Installing CodeProject.AI Analysis Module
2024-02-17 08:11:27: ======================================================================
2024-02-17 08:11:27: CodeProject.AI Installer
2024-02-17 08:11:27: ======================================================================
2024-02-17 08:11:28: 48.04 GiB of 129.02 GiB available on Docker
2024-02-17 08:11:28: Installing xz-utils...
2024-02-17 08:11:28: WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
2024-02-17 08:11:28: WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
2024-02-17 08:11:28: Hit:1 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
2024-02-17 08:11:28: Hit:2 http://archive.ubuntu.com/ubuntu jammy InRelease
2024-02-17 08:11:28: Get:3 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
2024-02-17 08:11:28: Get:4 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]
2024-02-17 08:11:28: Hit:5 https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu jammy InRelease
2024-02-17 08:11:28: Hit:6 http://archive.ubuntu.com/ubuntu jammy-backports InRelease
2024-02-17 08:11:29: Get:7 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [1343 kB]
2024-02-17 08:11:29: General CodeProject.AI setup
2024-02-17 08:11:29: Setting permissions on downloads folder...Done
2024-02-17 08:11:29: Setting permissions on runtimes folder...Done
2024-02-17 08:11:29: Setting permissions on persisted data folder...Done
2024-02-17 08:11:29: GPU support
2024-02-17 08:11:29: CUDA (NVIDIA) Present: No
2024-02-17 08:11:29: ROCm (AMD) Present: No
2024-02-17 08:11:29: MPS (Apple) Present: No
2024-02-17 08:11:29: Reading module settings.......Done
2024-02-17 08:11:29: Processing module ObjectDetectionCoral 2.1.3
2024-02-17 08:11:29: Installing Python 3.9
2024-02-17 08:11:29: Python 3.9 is already installed
2024-02-17 08:11:29: Ensuring PIP in base python install...Get:8 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [1742 kB]
2024-02-17 08:11:29: Fetched 3315 kB in 2s (1862 kB/s)
2024-02-17 08:11:30: Reading package lists...
2024-02-17 08:11:30: Building dependency tree...
2024-02-17 08:11:30: Reading state information...
2024-02-17 08:11:30: 3 packages can be upgraded. Run 'apt list --upgradable' to see them.
2024-02-17 08:11:32: done
2024-02-17 08:11:33: Upgrading PIP in base python install... done
2024-02-17 08:11:33: Installing Virtual Environment tools for Linux...
2024-02-17 08:11:36: Searching for python3-pip python3-setuptools python3.9...All good.
2024-02-17 08:11:39: Creating Virtual Environment (Local)... Done
2024-02-17 08:11:39: Checking for Python 3.9...(Found Python 3.9.18) All good
2024-02-17 08:11:41: Upgrading PIP in virtual environment... done
2024-02-17 08:11:44: Installing updated setuptools in venv... Done
2024-02-17 08:11:45: Searching for gnupg...All good.
2024-02-17 08:11:47: Downloading edge TPU runtime... already exists...Expanding... Done.
2024-02-17 08:11:47: Moving contents of edgetpu_runtime-20221024.zip to edgetpu_runtime...done.
2024-02-17 08:11:47: Using the reduced operating frequency for Coral USB devices.
2024-02-17 08:11:47: Installing Edge TPU runtime library [/usr/lib/x86_64-linux-gnu/libedgetpu.so.1.0]...
2024-02-17 08:11:47: File already exists. Replacing it...
2024-02-17 08:11:47: Done.
2024-02-17 08:11:48: Downloading EfficientDet (large) models... already exists...Expanding... Done.
2024-02-17 08:11:48: Moving contents of objectdetection-efficientdet-large-edgetpu.zip to assets...done.
2024-02-17 08:11:48: Downloading EfficientDet (medium) models... already exists...Expanding... Done.
2024-02-17 08:11:48: Moving contents of objectdetection-efficientdet-medium-edgetpu.zip to assets...done.
2024-02-17 08:11:49: Downloading EfficientDet (small) models... already exists...Expanding... Done.
2024-02-17 08:11:49: Moving contents of objectdetection-efficientdet-small-edgetpu.zip to assets...done.
2024-02-17 08:11:49: Downloading EfficientDet (tiny) models... already exists...Expanding... Done.
2024-02-17 08:11:49: Moving contents of objectdetection-efficientdet-tiny-edgetpu.zip to assets...done.
2024-02-17 08:11:50: Downloading MobileNet (large) models... already exists...Expanding... Done.
2024-02-17 08:11:50: Moving contents of objectdetection-mobilenet-large-edgetpu.zip to assets...done.
2024-02-17 08:11:51: Downloading MobileNet (medium) models... already exists...Expanding... Done.
2024-02-17 08:11:51: Moving contents of objectdetection-mobilenet-medium-edgetpu.zip to assets...done.
2024-02-17 08:11:51: Downloading MobileNet (small) models... already exists...Expanding... Done.
2024-02-17 08:11:51: Moving contents of objectdetection-mobilenet-small-edgetpu.zip to assets...done.
2024-02-17 08:11:51: Downloading MobileNet (tiny) models... already exists...Expanding... Done.
2024-02-17 08:11:51: Moving contents of objectdetection-mobilenet-tiny-edgetpu.zip to assets...done.
2024-02-17 08:11:53: Downloading YOLOv8 (large) models... already exists...Expanding... Done.
2024-02-17 08:11:53: Moving contents of objectdetection-yolov8-large-edgetpu.zip to assets...done.
2024-02-17 08:11:55: Downloading YOLOv8 (medium) models... already exists...Expanding... Done.
2024-02-17 08:11:55: Moving contents of objectdetection-yolov8-medium-edgetpu.zip to assets...done.
2024-02-17 08:11:56: Downloading YOLOv8 (small) models... already exists...Expanding... Done.
2024-02-17 08:11:56: Moving contents of objectdetection-yolov8-small-edgetpu.zip to assets...done.
2024-02-17 08:11:56: Downloading YOLOv8 (tiny) models... already exists...Expanding... Done.
2024-02-17 08:11:56: Moving contents of objectdetection-yolov8-tiny-edgetpu.zip to assets...done.
2024-02-17 08:11:56: Installing Python packages for Object Detection (Coral)
2024-02-17 08:11:56: Installing GPU-enabled libraries: If available
2024-02-17 08:11:57: Searching for python3-pip...All good.
2024-02-17 08:11:59: Ensuring PIP compatibility... Done
2024-02-17 08:11:59: Python packages will be specified by requirements.txt
2024-02-17 08:12:12: - Installing Tensorflow Lite... (not checked) Done
2024-02-17 08:12:19: - Installing PyCoral... (failed check) Done
2024-02-17 08:12:19: - Installing NumPy, the fundamental package for array computing with Python...Already installed
2024-02-17 08:12:19: - Installing Pillow, a Python Image Library...Already installed
2024-02-17 08:12:19: Installing Python packages for the CodeProject.AI Server SDK
2024-02-17 08:12:20: Searching for python3-pip...All good.
2024-02-17 08:12:22: Ensuring PIP compatibility... Done
2024-02-17 08:12:22: Python packages will be specified by requirements.txt
2024-02-17 08:12:22: - Installing Pillow, a Python Image Library...Already installed
2024-02-17 08:12:23: - Installing Charset normalizer... (✅ checked) Done
2024-02-17 08:12:26: - Installing aiohttp, the Async IO HTTP library... (✅ checked) Done
2024-02-17 08:12:28: - Installing aiofiles, the Async IO Files library... (✅ checked) Done
2024-02-17 08:12:29: - Installing py-cpuinfo to allow us to query CPU info... (✅ checked) Done
2024-02-17 08:12:32: - Installing Requests, the HTTP library... (✅ checked) Done
2024-02-17 08:12:36: WARNING: Logging before InitGoogleLogging() is written to STDERR
2024-02-17 08:12:36: I20240217 08:12:36.301470 7544 pipelined_model_runner.cc:172] Thread: 140672648335360 receives empty request
2024-02-17 08:12:36: I20240217 08:12:36.301486 7544 pipelined_model_runner.cc:245] Thread: 140672648335360 is shutting down the pipeline...
2024-02-17 08:12:36: I20240217 08:12:36.301586 7544 pipelined_model_runner.cc:255] Thread: 140672648335360 Pipeline is off.
2024-02-17 08:12:36: I20240217 08:12:36.301597 7590 pipelined_model_runner.cc:207] Queue is empty and `StopWaiters()` is called.
2024-02-17 08:12:36: I20240217 08:12:36.301739 7544 pipelined_model_runner.cc:172] Thread: 140672648335360 receives empty request
2024-02-17 08:12:36: E20240217 08:12:36.301748 7544 pipelined_model_runner.cc:240] Thread: 140672648335360 Pipeline was turned off before.
2024-02-17 08:12:36: I20240217 08:12:36.301790 7544 pipelined_model_runner.cc:207] Queue is empty and `StopWaiters()` is called.
2024-02-17 08:12:36: E20240217 08:12:36.301808 7544 pipelined_model_runner.cc:240] Thread: 140672648335360 Pipeline was turned off before.
2024-02-17 08:12:36: E20240217 08:12:36.301817 7544 pipelined_model_runner.cc:147] Failed to shutdown status: INTERNAL: Pipeline was turned off before.
2024-02-17 08:12:39: I20240217 08:12:39.092777 7544 pipelined_model_runner.cc:172] Thread: 140672648335360 receives empty request
2024-02-17 08:12:39: I20240217 08:12:39.092801 7544 pipelined_model_runner.cc:245] Thread: 140672648335360 is shutting down the pipeline...
2024-02-17 08:12:39: I20240217 08:12:39.092916 7544 pipelined_model_runner.cc:255] Thread: 140672648335360 Pipeline is off.
2024-02-17 08:12:39: I20240217 08:12:39.092931 7612 pipelined_model_runner.cc:207] Queue is empty and `StopWaiters()` is called.
2024-02-17 08:12:39: I20240217 08:12:39.093159 7544 pipelined_model_runner.cc:172] Thread: 140672648335360 receives empty request
2024-02-17 08:12:39: E20240217 08:12:39.093185 7544 pipelined_model_runner.cc:240] Thread: 140672648335360 Pipeline was turned off before.
2024-02-17 08:12:39: I20240217 08:12:39.093247 7544 pipelined_model_runner.cc:207] Queue is empty and `StopWaiters()` is called.
2024-02-17 08:12:39: E20240217 08:12:39.093278 7544 pipelined_model_runner.cc:240] Thread: 140672648335360 Pipeline was turned off before.
2024-02-17 08:12:39: E20240217 08:12:39.093294 7544 pipelined_model_runner.cc:147] Failed to shutdown status: INTERNAL: Pipeline was turned off before.
2024-02-17 08:12:44: Self test: Self-test passed
2024-02-17 08:12:44: Module setup time 00:01:15
2024-02-17 08:12:44: Setup complete
2024-02-17 08:12:44: Total setup time 00:01:17
|
|
|
|
|
Thanks for that.
So the MobileNet models are fast (as they are designed to be) but not accurate enough. There are EfficientDet models you could try, but you may want to try the YOLO model. We don't have a model chooser set up for Coral yet, so if you're keen you could open a terminal into your Docker container, edit the modulesettings.json file to set the environment variable "CPAI_CORAL_MODEL_NAME" to "yolov8", and then restart the container.
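For anyone who'd rather script that edit than do it by hand in the container, here's a minimal sketch. It assumes the environment variable lives under a Modules → ObjectDetectionCoral → EnvironmentVariables section of the JSON (as the module info dump above suggests); adjust the keys if your file is laid out differently:

```python
import json
from pathlib import Path

def set_coral_model(settings_path: str, model_name: str) -> None:
    """Set CPAI_CORAL_MODEL_NAME in a modulesettings.json-style file.

    Assumption: env vars sit under Modules -> ObjectDetectionCoral ->
    EnvironmentVariables; tweak the keys to match your actual layout.
    """
    path = Path(settings_path)
    settings = json.loads(path.read_text())
    env = settings["Modules"]["ObjectDetectionCoral"]["EnvironmentVariables"]
    env["CPAI_CORAL_MODEL_NAME"] = model_name
    path.write_text(json.dumps(settings, indent=2))
```

After writing the file, restart the container (e.g. `docker restart <container>`) so the module picks up the change.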
cheers
Chris Maunder
|
|
|
|
|
The old MobileNet Medium was pretty good, though. I just thought it was weird that the Tiny, Small, and Medium models ALL take 13ms while Large jumps to 220ms, whereas previously Medium was, as expected, around 114ms.
I changed it to "yolov8" but didn't see any difference at all in the inference times (still 13ms) or in the objects found. I then tried "EfficientDet-Lite"; it still takes only 13ms and is still not very good (but it does seem to give different results from MobileNet and yolov8).
So... I'm not sure what to do. I'm stuck between quick but bad results with the new Medium model (which may or may not be malfunctioning) and good but very slow results with the Large model.
Btw, the modulesettings.json says:
"CPAI_CORAL_MODEL_NAME": "yolov8",
so is the comment on that line that says "YOLOv5" instead of "yolov8" wrong?
Also, modulesettings.json has the "MODEL_SIZE" variable a couple of lines below this one, and changing it has no effect on which model size is actually used in the UI.
|
|
|
|
|
Did you ever get an answer to this?
I have something similar. I don't know the Coral version, but with CPAI 2.5.1 the models were working well. I was using EfficientDet-Lite at medium size; it worked best for me and took about 100-150 ms. Ever since upgrading, it seems to use MobileNet SSD and Tiny no matter what I configure (I can verify this in the Explorer). Oddly enough, choosing a model in the Explorer does work, so something in the configuration isn't being applied. I also can't seem to get the module to auto-start when the PC restarts. I created a post but haven't gotten any feedback.
|
|
|
|
|
I keep seeing these tracebacks in the log, and every time one occurs Blue Iris throws an Error 500. The only installed modules are Face Processing, YOLOv5 .NET, and YOLOv5 6.2; the only one enabled, however, is YOLOv5 6.2. I originally had CodeProject at 2.3.4 but recently upgraded it to 2.5.1. Blue Iris is always on whatever the latest critical/stable build is. It should be noted that I run a total of six Blue Iris/CodeProject installations and this is the only site that has this problem. The site with the problem has the same hardware as three of the other five sites. CodeProject is installed on the same Windows machine/install as BI.
Here is the traceback.
23:30:33:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5-6.2\detect.py", line 141, in do_detection
det = detector(img, size=640)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 121, in _forward_once
x = m(x) # run
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 75, in forward
wh = (wh * 2) ** 2 * self.anchor_grid[i] # wh
RuntimeError: The size of tensor a (48) must match the size of tensor b (80) at non-singleton dimension 3
I tried uninstalling, manually deleting Program Files\CodeProject and ProgramData\CodeProject, restarting, then reinstalling. Any ideas?
|
|
|
|
|
What is the difference in hardware between the machines? Can you paste the System Info tab of that machine here so we can take a look? I suspect it's a GPU issue. A random thought: go to the dashboard, head to the 6.2 module's status line, and disable half precision.
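As background on the error itself: "The size of tensor a (48) must match the size of tensor b (80) at non-singleton dimension 3" is an elementwise broadcasting failure. Two tensors can only be multiplied elementwise if, aligning shapes from the trailing dimension, each pair of sizes is equal or one of them is 1. A minimal pure-Python sketch of that rule (the shapes below are only illustrative, not the actual model's):

```python
from itertools import zip_longest

def broadcastable(shape_a, shape_b):
    """NumPy/PyTorch-style broadcast check: align shapes from the
    trailing dimension; each size pair must match or contain a 1."""
    for a, b in zip_longest(reversed(shape_a), reversed(shape_b), fillvalue=1):
        if a != b and a != 1 and b != 1:
            return False
    return True

# A 48-vs-80 clash like the traceback's (neither side is 1, so it fails):
print(broadcastable((1, 3, 48, 48, 2), (1, 3, 80, 80, 2)))  # → False
# Sizes of 1 broadcast fine:
print(broadcastable((1, 3, 80, 80, 2), (1, 3, 1, 1, 2)))    # → True
```

So the traceback is saying the detector's `wh` tensor and its `anchor_grid` came out with incompatible grid sizes for that layer.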
cheers
Chris Maunder
|
|
|
|
|
The machines are all SFF Dell Optiplex machines using CPU only. I just disabled half precision; I'll see if it keeps happening.
Server version: 2.5.1
System: Windows
Operating System: Windows (Microsoft Windows 10.0.20348)
CPUs: Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz (Intel)
1 CPU x 4 cores. 4 logical processors (x64)
GPU (Primary): Intel(R) HD Graphics 530 (1,024 MiB) (Intel Corporation)
Driver: 30.0.101.1692
System RAM: 8 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
.NET framework: .NET 7.0.10
Default Python:
Video adapter info:
Intel(R) HD Graphics 530:
Driver Version 30.0.101.1692
Video Processor Intel(R) HD Graphics Family
System GPU info:
GPU 3D Usage 9%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
|
|
|
|
Disabling half precision also did not resolve the issue.
|
|
|
|
|
I'd recommend switching to the .NET object detector
cheers
Chris Maunder
|
|
|
|
|
It has only been ~12 hours since I made the change but so far so good. Thank you.
|
|
|
|
|
Add a module Dashboard HTML so each module can have its own customizable settings. For example, users could access the settings below.
"PLATE_CONFIDENCE": 0.7,
"PLATE_ROTATE_DEG": 0,
"AUTO_PLATE_ROTATE": true,
"PLATE_RESCALE_FACTOR": 2,
"OCR_OPTIMIZATION": true,
"OCR_OPTIMAL_CHARACTER_HEIGHT": 60,
"OCR_OPTIMAL_CHARACTER_WIDTH": 30
|
|
|
|
|
You can kinda sorta do that with the UIElements section in the module settings, but it would be a little dodgy in that you'd have to provide discrete values (e.g. five menu options for confidence in steps of 0.2, or ten options for character height in steps of 10). What I was planning was offering range selectors and true/false switches for menu items. That would solve this particular use case.
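If you do go the discrete-values route, the repetitive "Options" entries can be generated rather than hand-typed. A sketch (the "PLATE_CONFIDENCE" setting name comes from the post above; the range and step are just example choices):

```python
import json

def confidence_menu(setting="PLATE_CONFIDENCE", start=50, stop=80, step=5):
    """Build a UIElements-style menu of discrete confidence options,
    labelled as percentages with values formatted as decimals."""
    options = [
        {"Label": f"{pct}%", "Setting": setting, "Value": f"{pct / 100:.2f}"}
        for pct in range(start, stop + step, step)
    ]
    return {"Label": "Plate Confidence", "Options": options}

print(json.dumps(confidence_menu(), indent=2))
```

Paste the output into the "Menus" array of the UIElements section; the same helper pattern works for character height or any other stepped setting.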
cheers
Chris Maunder
|
|
|
|
|
Thanks Chris, I guess I'll be releasing a v3.0.2 soon with these changes:
"UIElements" : {
"Menus": [
{
"Label": "Plate Confidence",
"Options": [
{ "Label": "50%", "Setting": "PLATE_CONFIDENCE", "Value": "0.50" },
{ "Label": "55%", "Setting": "PLATE_CONFIDENCE", "Value": "0.55" },
{ "Label": "60%", "Setting": "PLATE_CONFIDENCE", "Value": "0.60" },
{ "Label": "65%", "Setting": "PLATE_CONFIDENCE", "Value": "0.65" },
{ "Label": "70%", "Setting": "PLATE_CONFIDENCE", "Value": "0.70" },
{ "Label": "75%", "Setting": "PLATE_CONFIDENCE", "Value": "0.75" },
{ "Label": "80%", "Setting": "PLATE_CONFIDENCE", "Value": "0.80" }
]
},
{
"Label": "Auto Plate Rotation",
"Options": [
{ "Label": "Enable", "Setting": "AUTO_PLATE_ROTATE", "Value": "true" },
{ "Label": "Disable", "Setting": "AUTO_PLATE_ROTATE", "Value": "false" }
]
},
{
"Label": "OCR Optimization",
"Options": [
{ "Label": "Enable", "Setting": "OCR_OPTIMIZATION", "Value": "true" },
{ "Label": "Disable", "Setting": "OCR_OPTIMIZATION", "Value": "false" }
]
}]
},
|
|
|
|
|
Nice. For now that's your best / easiest option
cheers
Chris Maunder
|
|
|
|
|