|
Question: has anyone tried a Tesla P4 and gotten it working with object detection? I have one and followed the instructions, but CodeProject AI is still not using it for some reason. I'm pretty certain that I've followed the instructions correctly. The computer with the Tesla P4 is running in an air-gapped environment, so it is also possible that I downloaded the wrong files...
|
|
|
|
|
You'll need an internet connection for the initial install. If you look at the logs on the dashboard then, since you're in an air-gapped setup, I would assume the install failed and you'll see lots of "cannot find file..." messages.
cheers
Chris Maunder
|
|
|
|
|
Hi Chris,
I understand about the internet connection. However, I believe there is a way to make it work offline somehow, as long as I get the settings and the placement of the files done correctly.
I actually took the script, downloaded the files that are mentioned in the script, and either installed them or put them into the respective folders. After that, I ran the script and did not get any errors.
What else could I be missing, though?
|
|
|
|
|
Member 15876241 wrote: I actually took the script, downloaded the files that are mentioned in the script, and either installed them or put them into the respective folders
Does that include all the Python packages that get installed by the setup scripts based on the requirements.*.txt files?
cheers
Chris Maunder
|
|
|
|
|
No, that part was done by a computer connected to the internet. The way I did it was by running a CodeProject AI on this internet-connected computer (which is an i5 7th generation, without a dedicated GPU). I initially used this computer to download modules (such as object detection, ALPR, etc.), and then I tested them on the internet computer to ensure they worked correctly. After confirming their functionality, I copied and pasted the modules to a computer in an air-gapped environment. So far, this process has been working without any problems.
This question only came up when I considered adding a Tesla P4 to the computer in the air-gapped environment.
Since I don't have access to the computer with CodeProject AI right now, I can't check the requirements.txt file to see if it includes a CUDA dependency - and now that you mention it, I think it might. But I could be mistaken.
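For reference, a CUDA dependency in those requirements files usually shows up as a "+cuNNN" local version suffix on a wheel pin or a pytorch.org CUDA index URL. A throwaway, self-contained sketch (the file contents below are made up purely for illustration; the real files live in each module's folder):

```shell
# Create an example requirements file just to show what a CUDA-specific
# pin looks like (versions and file name here are illustrative).
mkdir -p /tmp/cpai-demo && cd /tmp/cpai-demo
cat > requirements.linux.cuda.txt <<'EOF'
--extra-index-url https://download.pytorch.org/whl/cu117
torch==1.13.0+cu117
torchvision==0.14.0+cu117
numpy
EOF

# A "+cuNNN" suffix or a CUDA index URL means GPU wheels are expected:
grep -En 'cu[0-9]+' requirements.linux.cuda.txt
```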
|
|
|
|
|
Ah, that all makes sense.
When installing the modules, the installer detects the hardware on your machine and installs the most appropriate packages. This means your non-GPU machine had non-GPU libraries and packages installed, so when you copy them over, the GPU machine doesn't have the appropriate libraries in place to support the GPU.
An option: move the P4 to your internet-connected machine. Run the installer. Take a copy of the files, and move them, and the P4, back to your air-gapped machine.
cheers
Chris Maunder
|
|
|
|
|
Chris, that is a brilliant idea! Unfortunately, my computer with an internet connection doesn't have a spare PCIe slot for the Tesla P4. So instead, I set up the Python 3.9 virtual environment and downloaded all the necessary packages. Then I transferred them to the computer in the air-gapped environment and ran the installation. It is now utilizing CUDA!
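For anyone else doing this, the package transfer boiled down to something like the sketch below. Paths are illustrative, and the requirements file here is comments-only so the demo runs without a network; the real file lists the module's packages, and you should run pip with the same Python version the module expects (3.9 in my case):

```shell
# --- On the internet-connected machine ---
mkdir -p /tmp/cpai-offline/wheelhouse && cd /tmp/cpai-offline
# Stand-in for the module's real requirements file (comments-only so this
# sketch is a no-op; the real one pins torch, numpy, etc.)
printf '# torch==...\n# numpy==...\n' > requirements.txt
# Download wheels (without installing) into a folder you can copy across:
python3 -m pip download -r requirements.txt -d ./wheelhouse

# --- On the air-gapped machine (after copying the wheelhouse folder) ---
# --no-index stops pip from ever touching the network; everything must
# come from the copied folder:
python3 -m pip install --no-index --find-links ./wheelhouse -r requirements.txt
```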
I was planning to use YOLOv8 along with ALPR, but I realized there's no license plate module available for it. I attempted to use the custom models from YOLOv5 6.2, but they seemed incompatible. Therefore, I will be directly using YOLOv5 6.2 for this project.
Thanks for your help!
|
|
|
|
|
I'm glad you got it sorted - well done!
cheers
Chris Maunder
|
|
|
|
|
I run a Tesla P4 without issues (excluding when there are installation issues, that is). Though it's not "air-gapped"... It's on a private VLAN with no internet access unless I'm installing/updating things.
|
|
|
|
|
Thank you, Andrew. I believe the difference lies in my approach of manually assembling the program: I am finding and collecting the individual files myself, rather than relying on a script to handle the entire process for me.
It's likely that I am missing some files that are essential to the requirements.
However, I'm glad to hear that your Tesla P4 is functioning properly. This clearly indicates that my issue is either incorrect settings or missing necessary files.
|
|
|
|
|
Hi all,
After a failed hard drive in my PowerEdge R230, I've set up a fresh install of Debian Bookworm on my server, setting up everything from scratch. I'm trying to install codeproject.ai using the .deb file. I followed the instructions on Installing CodeProject.AI Server on Linux - CodeProject.AI Server v2.5.0[^]. It installed OK, and I can access the server on 192.168.x.x:5000. However, the modules never installed successfully, always showing the error: "Install failed: Unable to install Python 3.8". I've tried to reinstall the modules following the README to no avail. I've tried uninstalling and reinstalling codeproject.ai, but still hitting the same issue.
Do I need Python installed before installing codeproject.ai? The documentation seems to imply that I don't, so I'd like to double-check.
In the server logs:
Error Error trying to start Object Detection (YOLOv5 6.2) (detect_adapter.py)
Error An error occurred trying to start process '/usr/bin/codeproject.ai-server-2.3.3/runtimes/bin/linux/python38/venv/bin/python3' with working directory '/usr/bin/codeproject.ai-server-2.3.3/modules/ObjectDetectionYolo'. No such file or directory
Error at System.Diagnostics.Process.ForkAndExecProcess(ProcessStartInfo startInfo, String resolvedFilename, String[] argv, String[] envp, String cwd, Boolean setCredentials, UInt32 userId, UInt32 groupId, UInt32[] groups, Int32& stdinFd, Int32& stdoutFd, Int32& stderrFd, Boolean usesTerminal, Boolean throwOnNoExec)
at System.Diagnostics.Process.StartCore(ProcessStartInfo startInfo)
at CodeProject.AI.Server.Modules.ModuleProcessServices.StartProcess(ModuleConfig module)
Error *** Please check the CodeProject.AI installation completed successfully
Infor Module ObjectDetectionYolo started successfully.
The full module installation log:
2024-01-22 10:39:19: Installing CodeProject.AI Analysis Module
2024-01-22 10:39:19: ======================================================================
2024-01-22 10:39:19: CodeProject.AI Installer
2024-01-22 10:39:19: ======================================================================
2024-01-22 10:39:19: 20.09 GiB available on linux
2024-01-22 10:39:19: General CodeProject.AI setup
2024-01-22 10:39:19: Setting permissions...insufficient permission to set permissions
2024-01-22 10:39:19: GPU support
2024-01-22 10:39:20: Searching for nvidia-cuda-toolkit...installing... Done
2024-01-22 10:39:20: CUDA Present...Yes (version 11.4)
2024-01-22 10:39:20: ROCm Present...No
2024-01-22 10:39:20: Processing module FaceProcessing 1.8.1
2024-01-22 10:39:20: Installing Python 3.8
2024-01-22 10:39:24: WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
2024-01-22 10:39:27: Installing Python Library 3.8... Done
2024-01-22 10:39:27: Install failed: Unable to install Python 3.8
2024-01-22 10:39:27: Setup complete
Installer exited with code 1
System Info from the server dashboard:
Server version: 2.3.3-Beta
System: Linux
Operating System: Linux (Linux 6.1.0-17-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.69-1 (2023-12-30))
CPUs: Intel(R) Xeon(R) CPU E3-1220 v5 @ 3.00GHz (Intel)
1 CPU x 4 cores. 4 logical processors (x64)
GPU: Tesla P4 (7 GiB) (NVIDIA)
Driver: 470.223.02 CUDA: 11.4 (max supported: 11.4) Compute:
System RAM: 8 GiB
Target: Linux
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
.NET framework: .NET 7.0.15
Video adapter info:
Matrox Electronics Systems Ltd. G200eR2 (rev 01):
Driver Version
Video Processor
System GPU info:
GPU 3D Usage 2%
GPU RAM Usage 930 MiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
I'd really appreciate any advice. Thank you!
|
|
|
|
|
This could be a permissions issue.
You could go about this two ways:

- Stop the current server instance using whatever method takes your fancy
- Restart the service under sudo:
  sudo bash /usr/bin/codeproject.ai-server/2.5.1/start.sh
  (That path is off the top of my head so season to taste)
- Try re-installing the module via the dashboard

or

- Head to the folder for the module that failed to install:
  cd /usr/bin/codeproject.ai-server/2.5.1/modules/FaceProcessing
- Run the setup manually, under sudo:
  sudo bash ../../setup.sh
- Try starting the module via the dashboard

One of them may help
cheers
Chris Maunder
|
|
|
|
|
Thank you for your reply. Unfortunately, it still does not work - it's still complaining about missing Python packages. I've looked at the root Python folder within the CodeProject install, and it's empty. How do I install Python, please? Are you sure I don't need Python installed system-wide? Can I manually put Python in the folder?
kit@iron-domino:/usr/bin/codeproject.ai-server-2.3.3/runtimes/bin/linux/python38$ ls -a
. ..
kit@iron-domino:/usr/bin/codeproject.ai-server-2.3.3/runtimes/bin/linux/python38$
First method:
kit@iron-domino:~$ sudo bash /usr/bin/codeproject.ai-server-2.3.3/start.sh
Infor ** System: Linux
Infor ** Operating System: Linux (Linux 6.1.0-17-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.69-1 (2023-12-30))
Infor ** CPUs: Intel(R) Xeon(R) CPU E3-1220 v5 @ 3.00GHz (Intel)
Infor ** 1 CPU x 4 cores. 4 logical processors (x64)
Infor ** System RAM: 8 GiB
Infor ** Target: Linux
Infor ** BuildConfig: Release
Infor ** Execution Env: Native
Infor ** Runtime Env: Production
Infor ** .NET framework: .NET 7.0.15
Infor ** App DataDir: /etc/codeproject/ai
Infor Video adapter info:
Infor Matrox Electronics Systems Ltd. G200eR2 (rev 01):
Infor Driver Version
Infor Video Processor
Infor *** STARTING CODEPROJECT.AI SERVER
Infor RUNTIMES_PATH = /usr/bin/codeproject.ai-server-2.3.3/runtimes
Infor PREINSTALLED_MODULES_PATH = /usr/bin/codeproject.ai-server-2.3.3/preinstalled-modules
Infor MODULES_PATH = /usr/bin/codeproject.ai-server-2.3.3/modules
Infor PYTHON_PATH = /bin/linux/%PYTHON_DIRECTORY%/venv/bin/python3
Infor Data Dir = /etc/codeproject/ai
Infor ** Server version: 2.3.3-Beta
Server is listening on port 32168
Server is also listening on legacy port 5000
Trace ModuleRunner Start
Trace Starting Background AI Modules
Trace Command: /usr/bin/codeproject.ai-server-2.3.3/runtimes/bin/linux/python38/venv/bin/python3
Debug
Debug Attempting to start ObjectDetectionYolo with /usr/bin/codeproject.ai-server-2.3.3/runtimes/bin/linux/python38/venv/bin/python3 "/usr/bin/codeproject.ai-server-2.3.3/modules/ObjectDetectionYolo/detect_adapter.py"
Trace Starting /usr...untimes/bin/linux/python38/venv/bin/python3 "/usr...les/ObjectDetectionYolo/detect_adapter.py"
Infor
Infor ** Module 'Object Detection (YOLOv5 6.2)' 1.7.1 (ID: ObjectDetectionYolo)
Infor ** Module Path: /usr/bin/codeproject.ai-server-2.3.3/modules/ObjectDetectionYolo
Infor ** AutoStart: True
Infor ** Queue: objectdetection_queue
Infor ** Platforms: all
Infor ** GPU Libraries: installed if available
Infor ** GPU Enabled: enabled
Infor ** Parallelism: 0
Infor ** Accelerator:
Infor ** Half Precis.: enable
Infor ** Runtime: python3.8
Infor ** Runtime Loc: Shared
Infor ** FilePath: detect_adapter.py
Infor ** Pre installed: False
Infor ** Start pause: 1 sec
Infor ** LogVerbosity:
Infor ** Valid: True
Infor ** Environment Variables
Infor ** APPDIR = %CURRENT_MODULE_PATH%
Infor ** CUSTOM_MODELS_DIR = %CURRENT_MODULE_PATH%/custom-models
Infor ** MODELS_DIR = %CURRENT_MODULE_PATH%/assets
Infor ** MODEL_SIZE = Medium
Infor ** USE_CUDA = True
Infor ** YOLOv5_AUTOINSTALL = false
Infor ** YOLOv5_VERBOSE = false
Infor
Error Error trying to start Object Detection (YOLOv5 6.2) (detect_adapter.py)
Error An error occurred trying to start process '/usr/bin/codeproject.ai-server-2.3.3/runtimes/bin/linux/python38/venv/bin/python3' with working directory '/usr/bin/codeproject.ai-server-2.3.3/modules/ObjectDetectionYolo'. No such file or directory
Error at System.Diagnostics.Process.ForkAndExecProcess(ProcessStartInfo startInfo, String resolvedFilename, String[] argv, String[] envp, String cwd, Boolean setCredentials, UInt32 userId, UInt32 groupId, UInt32[] groups, Int32& stdinFd, Int32& stdoutFd, Int32& stderrFd, Boolean usesTerminal, Boolean throwOnNoExec)
at System.Diagnostics.Process.StartCore(ProcessStartInfo startInfo)
at CodeProject.AI.Server.Modules.ModuleProcessServices.StartProcess(ModuleConfig module)
Error *** Please check the CodeProject.AI installation completed successfully
Trace Command: /usr/bin/codeproject.ai-server-2.3.3/runtimes/bin/linux/python38/venv/bin/python3
Debug
Debug Attempting to start FaceProcessing with /usr/bin/codeproject.ai-server-2.3.3/runtimes/bin/linux/python38/venv/bin/python3 "/usr/bin/codeproject.ai-server-2.3.3/modules/FaceProcessing/intelligencelayer/face.py"
Trace Starting /usr...untimes/bin/linux/python38/venv/bin/python3 "/usr.../FaceProcessing/intelligencelayer/face.py"
Infor
Infor ** Module 'Face Processing' 1.8.1 (ID: FaceProcessing)
Infor ** Module Path: /usr/bin/codeproject.ai-server-2.3.3/modules/FaceProcessing
Infor ** AutoStart: True
Infor ** Queue: faceprocessing_queue
Infor ** Platforms: windows,linux,linux-arm64,macos,macos-arm64
Infor ** GPU Libraries: installed if available
Infor ** GPU Enabled: enabled
Infor ** Parallelism: 0
Infor ** Accelerator:
Infor ** Half Precis.: enable
Infor ** Runtime: python3.8
Infor ** Runtime Loc: Shared
Infor ** FilePath: intelligencelayer\face.py
Infor ** Pre installed: False
Infor ** Start pause: 3 sec
Infor ** LogVerbosity:
Infor ** Valid: True
Infor ** Environment Variables
Infor ** APPDIR = %CURRENT_MODULE_PATH%\intelligencelayer
Infor ** DATA_DIR = %DATA_DIR%
Infor ** MODE = MEDIUM
Infor ** MODELS_DIR = %CURRENT_MODULE_PATH%\assets
Infor ** PROFILE = desktop_gpu
Infor ** USE_CUDA = True
Infor ** YOLOv5_AUTOINSTALL = false
Infor ** YOLOv5_VERBOSE = false
Infor
Error Error trying to start Face Processing (intelligencelayer\face.py)
Error An error occurred trying to start process '/usr/bin/codeproject.ai-server-2.3.3/runtimes/bin/linux/python38/venv/bin/python3' with working directory '/usr/bin/codeproject.ai-server-2.3.3/modules/FaceProcessing'. No such file or directory
Error at System.Diagnostics.Process.ForkAndExecProcess(ProcessStartInfo startInfo, String resolvedFilename, String[] argv, String[] envp, String cwd, Boolean setCredentials, UInt32 userId, UInt32 groupId, UInt32[] groups, Int32& stdinFd, Int32& stdoutFd, Int32& stderrFd, Boolean usesTerminal, Boolean throwOnNoExec)
at System.Diagnostics.Process.StartCore(ProcessStartInfo startInfo)
at CodeProject.AI.Server.Modules.ModuleProcessServices.StartProcess(ModuleConfig module)
Error *** Please check the CodeProject.AI installation completed successfully
Warning: unknown mime-type for "http://localhost:32168" -- using "application/octet-stream"
Error: no such file "http://localhost:32168"
Debug Current Version is 2.3.3-Beta
Infor *** A new version 2.3.4-Beta is available
Infor FaceProcessing has left the building
Module files removed. Setting module state to Available
Infor Preparing to install module 'FaceProcessing'
Infor Downloading module 'FaceProcessing'
Infor Installing module 'FaceProcessing'
Debug Installer script at '/usr/bin/codeproject.ai-server-2.3.3/setup.sh'
Infor FaceProcessing: Installing CodeProject.AI Analysis Module
Infor FaceProcessing: ======================================================================
Infor FaceProcessing: CodeProject.AI Installer
Infor FaceProcessing: ======================================================================
Infor FaceProcessing: 10.03 GiB available on linux
Infor FaceProcessing: General CodeProject.AI setup
Infor FaceProcessing: Setting permissions...Done
Infor FaceProcessing: GPU support
Infor FaceProcessing: CUDA Present...Yes (version 11.8)
Infor FaceProcessing: ROCm Present...No
Infor FaceProcessing: Processing module FaceProcessing 1.8.1
Infor FaceProcessing: Installing Python 3.8
Error FaceProcessing: WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Infor FaceProcessing: Installing Python Library 3.8... Done
Infor FaceProcessing: Install failed: Unable to install Python 3.8
Infor FaceProcessing: Setup complete
Infor Module FaceProcessing installed successfully.
Trace Command: /usr/bin/codeproject.ai-server-2.3.3/runtimes/bin/linux/python38/venv/bin/python3
Debug
Debug Attempting to start FaceProcessing with /usr/bin/codeproject.ai-server-2.3.3/runtimes/bin/linux/python38/venv/bin/python3 "/usr/bin/codeproject.ai-server-2.3.3/modules/FaceProcessing/intelligencelayer/face.py"
Trace Starting /usr...untimes/bin/linux/python38/venv/bin/python3 "/usr.../FaceProcessing/intelligencelayer/face.py"
Infor
Infor ** Module 'Face Processing' 1.8.1 (ID: FaceProcessing)
Infor Installer exited with code 1
Infor ** Module Path: /usr/bin/codeproject.ai-server-2.3.3/modules/FaceProcessing
Infor ** AutoStart: True
Infor ** Queue: faceprocessing_queue
Infor ** Platforms: windows,linux,linux-arm64,macos,macos-arm64
Infor ** GPU Libraries: installed if available
Infor ** GPU Enabled: enabled
Infor ** Parallelism: 0
Infor ** Accelerator:
Infor ** Half Precis.: enable
Infor ** Runtime: python3.8
Infor ** Runtime Loc: Shared
Infor ** FilePath: intelligencelayer\face.py
Infor ** Pre installed: False
Infor ** Start pause: 3 sec
Infor ** LogVerbosity:
Infor ** Valid: True
Infor ** Environment Variables
Infor ** APPDIR = %CURRENT_MODULE_PATH%\intelligencelayer
Infor ** DATA_DIR = %DATA_DIR%
Infor ** MODE = MEDIUM
Infor ** MODELS_DIR = %CURRENT_MODULE_PATH%\assets
Infor ** PROFILE = desktop_gpu
Infor ** USE_CUDA = True
Infor ** YOLOv5_AUTOINSTALL = false
Infor ** YOLOv5_VERBOSE = false
Infor
Error Error trying to start Face Processing (intelligencelayer\face.py)
Error An error occurred trying to start process '/usr/bin/codeproject.ai-server-2.3.3/runtimes/bin/linux/python38/venv/bin/python3' with working directory '/usr/bin/codeproject.ai-server-2.3.3/modules/FaceProcessing'. No such file or directory
Error at System.Diagnostics.Process.ForkAndExecProcess(ProcessStartInfo startInfo, String resolvedFilename, String[] argv, String[] envp, String cwd, Boolean setCredentials, UInt32 userId, UInt32 groupId, UInt32[] groups, Int32& stdinFd, Int32& stdoutFd, Int32& stderrFd, Boolean usesTerminal, Boolean throwOnNoExec)
at System.Diagnostics.Process.StartCore(ProcessStartInfo startInfo)
at CodeProject.AI.Server.Modules.ModuleProcessServices.StartProcess(ModuleConfig module)
Error *** Please check the CodeProject.AI installation completed successfully
Infor Module FaceProcessing started successfully.
Infor ObjectDetectionYolo has left the building
Module files removed. Setting module state to Available
Infor Preparing to install module 'ObjectDetectionYolo'
Infor Downloading module 'ObjectDetectionYolo'
Infor Installing module 'ObjectDetectionYolo'
Debug Installer script at '/usr/bin/codeproject.ai-server-2.3.3/setup.sh'
Infor ObjectDetectionYolo: Installing CodeProject.AI Analysis Module
Infor ObjectDetectionYolo: ======================================================================
Infor ObjectDetectionYolo: CodeProject.AI Installer
Infor ObjectDetectionYolo: ======================================================================
Infor ObjectDetectionYolo: 10.03 GiB available on linux
Infor ObjectDetectionYolo: General CodeProject.AI setup
Infor ObjectDetectionYolo: Setting permissions...Done
Infor ObjectDetectionYolo: GPU support
Infor ObjectDetectionYolo: CUDA Present...Yes (version 11.8)
Infor ObjectDetectionYolo: ROCm Present...No
Infor ObjectDetectionYolo: Processing module ObjectDetectionYolo 1.7.1
Infor ObjectDetectionYolo: Installing Python 3.8
Error ObjectDetectionYolo: WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Infor ObjectDetectionYolo: Installing Python Library 3.8... Done
Infor ObjectDetectionYolo: Install failed: Unable to install Python 3.8
Infor ObjectDetectionYolo: Setup complete
Infor Module ObjectDetectionYolo installed successfully.
Infor Installer exited with code 1
Trace Command: /usr/bin/codeproject.ai-server-2.3.3/runtimes/bin/linux/python38/venv/bin/python3
Debug
Debug Attempting to start ObjectDetectionYolo with /usr/bin/codeproject.ai-server-2.3.3/runtimes/bin/linux/python38/venv/bin/python3 "/usr/bin/codeproject.ai-server-2.3.3/modules/ObjectDetectionYolo/detect_adapter.py"
Trace Starting /usr...untimes/bin/linux/python38/venv/bin/python3 "/usr...les/ObjectDetectionYolo/detect_adapter.py"
Infor
Infor ** Module 'Object Detection (YOLOv5 6.2)' 1.7.1 (ID: ObjectDetectionYolo)
Infor ** Module Path: /usr/bin/codeproject.ai-server-2.3.3/modules/ObjectDetectionYolo
Infor ** AutoStart: True
Infor ** Queue: objectdetection_queue
Infor ** Platforms: all
Infor ** GPU Libraries: installed if available
Infor ** GPU Enabled: enabled
Infor ** Parallelism: 0
Infor ** Accelerator:
Infor ** Half Precis.: enable
Infor ** Runtime: python3.8
Infor ** Runtime Loc: Shared
Infor ** FilePath: detect_adapter.py
Infor ** Pre installed: False
Infor ** Start pause: 1 sec
Infor ** LogVerbosity:
Infor ** Valid: True
Infor ** Environment Variables
Infor ** APPDIR = %CURRENT_MODULE_PATH%
Infor ** CUSTOM_MODELS_DIR = %CURRENT_MODULE_PATH%/custom-models
Infor ** MODELS_DIR = %CURRENT_MODULE_PATH%/assets
Infor ** MODEL_SIZE = Medium
Infor ** USE_CUDA = True
Infor ** YOLOv5_AUTOINSTALL = false
Infor ** YOLOv5_VERBOSE = false
Infor
Error Error trying to start Object Detection (YOLOv5 6.2) (detect_adapter.py)
Error An error occurred trying to start process '/usr/bin/codeproject.ai-server-2.3.3/runtimes/bin/linux/python38/venv/bin/python3' with working directory '/usr/bin/codeproject.ai-server-2.3.3/modules/ObjectDetectionYolo'. No such file or directory
Error at System.Diagnostics.Process.ForkAndExecProcess(ProcessStartInfo startInfo, String resolvedFilename, String[] argv, String[] envp, String cwd, Boolean setCredentials, UInt32 userId, UInt32 groupId, UInt32[] groups, Int32& stdinFd, Int32& stdoutFd, Int32& stderrFd, Boolean usesTerminal, Boolean throwOnNoExec)
at System.Diagnostics.Process.StartCore(ProcessStartInfo startInfo)
at CodeProject.AI.Server.Modules.ModuleProcessServices.StartProcess(ModuleConfig module)
Error *** Please check the CodeProject.AI installation completed successfully
Infor Module ObjectDetectionYolo started successfully.
Second method:
kit@iron-domino:/usr/bin/codeproject.ai-server-2.3.3/modules/FaceProcessing$ sudo bash ../../setup.sh
Installing CodeProject.AI Analysis Module
======================================================================
CodeProject.AI Installer
======================================================================
10.03 GiB available on linux
General CodeProject.AI setup
Setting permissions...Done
GPU support
CUDA Present...Yes (version 11.8)
ROCm Present...No
Processing module FaceProcessing 1.8.1
Installing Python 3.8
Installing Python Library 3.8...
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Done
Install failed: Unable to install Python 3.8
Setup complete
kit@iron-domino:/usr/bin/codeproject.ai-server-2.3.3/modules/FaceProcessing$
|
|
|
|
|
.... anyone? I need to be able to fix this, please.
Do I need Python pre-installed? If so, which packages would I need? Thank you!
|
|
|
|
|
Please Help
|
|
|
|
|
This has been fixed in the latest. I'd recommend trying the 2.5.1-RC1 installation.
cheers
Chris Maunder
|
|
|
|
|
Hello, I'm trying to add Super Resolution to my CodeProject.AI_ServerGPU Docker container running on UNRAID. I went into the container and clicked Install Modules and clicked Install beside Super Resolution. I get the below log within the Server logs tab in the container. I also tried letting the container run as Privileged with no luck.
When using Docker, is there a different process I need to follow to install additional modules within this container, rather than using the Install Modules button?
My environment details:
OS: UNRAID 6.12.6
Docker container: CodeProject.AI_ServerGPU version 2.2.4-Beta
GPU: NVIDIA GeForce GTX 1050 Ti on the latest driver: v545.29.06
CUDA Version 12.3
17:34:12:Preparing to install module 'SuperResolution'
17:34:12:Downloading module 'SuperResolution'
17:34:13:Installing module 'SuperResolution'
17:34:13:SuperResolution: Hi Docker! We will disable shared python installs for downloaded modules
17:34:13:SuperResolution: No schemas installed
17:34:13:SuperResolution: (No schemas means: we can't detect if you're in light or dark mode)
17:34:13:SuperResolution: sh: 1: lsmod: not found
17:34:13:SuperResolution: Installing CodeProject.AI Analysis Module
17:34:13:SuperResolution: ======================================================================
17:34:13:SuperResolution: CodeProject.AI Installer
17:34:13:SuperResolution: ======================================================================
17:34:13:SuperResolution: 66.02 GiB available
17:34:13:SuperResolution: Installing curl...
17:34:13:SuperResolution: WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
17:34:14:SuperResolution: E: Failed to fetch http:
17:34:14:SuperResolution: E: Failed to fetch http:
17:34:14:SuperResolution: E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
17:34:14:Module SuperResolution installed successfully.
17:34:14:
17:34:14:Module 'Super Resolution' 1.6 (ID: SuperResolution)
17:34:14:Installer exited with code 10
17:34:14:Module Path: /app/modules/SuperResolution
17:34:14:AutoStart: True
17:34:14:Queue: superresolution_queue
17:34:14:Platforms: windows,linux,linux-arm64,macos,macos-arm64
17:34:14:GPU: Support disabled
17:34:14:Parallelism: 1
17:34:14:Accelerator:
17:34:14:Half Precis.: enable
17:34:14:Runtime: python38
17:34:14:Runtime Loc: Local
17:34:14:FilePath: superres_adapter.py
17:34:14:Pre installed: False
17:34:14:Start pause: 0 sec
17:34:14:LogVerbosity:
17:34:14:Valid: True
17:34:14:Environment Variables
17:34:14:PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION = python
17:34:14:
17:34:14:Error trying to start Super Resolution (superres_adapter.py)
17:34:14:Module SuperResolution started successfully.
17:34:14:An error occurred trying to start process '/app/modules/SuperResolution/bin/linux/python38/venv/bin/python3' with working directory '/app/modules/SuperResolution'. No such file or directory
17:34:14: at System.Diagnostics.Process.ForkAndExecProcess(ProcessStartInfo startInfo, String resolvedFilename, String[] argv, String[] envp, String cwd, Boolean setCredentials, UInt32 userId, UInt32 groupId, UInt32[] groups, Int32& stdinFd, Int32& stdoutFd, Int32& stderrFd, Boolean usesTerminal, Boolean throwOnNoExec)
at System.Diagnostics.Process.StartCore(ProcessStartInfo startInfo)
at CodeProject.AI.Server.Modules.ModuleProcessServices.StartProcess(ModuleConfig module)
17:34:14:Please check the CodeProject.AI installation completed successfully
17:34:15:Call to Install on module SuperResolution has completed.
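Side note on the "E: Failed to fetch http:" lines above: they mean apt inside the container couldn't reach its package mirrors, so the module's extra dependencies never downloaded. A quick, guarded check from the host (the container name is the one from my setup; substitute whatever `docker ps` shows for yours):

```shell
# Check whether apt inside the container can reach its mirrors.
# Guarded so the snippet is a harmless no-op where Docker isn't present.
if command -v docker >/dev/null 2>&1; then
  if docker exec CodeProject.AI_ServerGPU apt-get update >/dev/null 2>&1; then
    msg="apt mirrors reachable from inside the container"
  else
    msg="apt mirrors NOT reachable - check the container's network mode and DNS"
  fi
else
  msg="docker not found on this host - run this from the UNRAID console"
fi
echo "$msg"
```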
|
|
|
|
|
You may want to try the newer 2.5.1 image. It's only a release candidate at this point, but could solve that issue.
cheers
Chris Maunder
|
|
|
|
|
Hey Chris, thanks for the suggestion
I installed the cuda12_2-2.5.1 image (codeproject/ai-server:cuda12_2-2.5.1) and confirmed it's still processing faces fine and connected to my GPU.
When I try to install Super Resolution I get the error "Error in Install SuperResolution: Call failed.". Server log just says
13:06:51:Call failed
I tried installing other modules and I get this same "Call Failed" message for any of them - Cartoonizer, Text summary, etc. Changing Server Logs to Trace doesn't give me any more info, just "Call failed" in red text.
|
|
|
|
|
Thanks Steve!
I tried it in Edge (instead of Chrome) and with Dev Tools open - I'm not sure which fixed it, but looks like it worked. Got the below output, it was a success! I'm able to start Super Resolution.
Sadly it doesn't look like Double Take is configured to use Super Resolution though - do you guys know what I need to do to have the image upscaled with Super Resolution before running face processing on it? I guess that's a Double Take config question about how to pass the command to CodeProject.AI
14:33:09:Preparing to install module 'SuperResolution'
14:33:09:Downloading module 'SuperResolution'
14:33:09:Installing module 'SuperResolution'
14:33:09:SuperResolution: Setting verbosity to quiet
14:33:09:SuperResolution: Hi Docker! We will disable shared python installs for downloaded modules
14:33:09:SuperResolution: No schemas installed
14:33:09:SuperResolution: (No schemas means: we can't detect if you're in light or dark mode)
14:33:09:SuperResolution: Installing CodeProject.AI Analysis Module
14:33:09:SuperResolution: ======================================================================
14:33:09:SuperResolution: CodeProject.AI Installer
14:33:09:SuperResolution: ======================================================================
14:33:09:SuperResolution: 57.07 GiB of 102.03 GiB available on Docker
14:33:09:SuperResolution: General CodeProject.AI setup
14:33:09:SuperResolution: Setting permissions on downloads folder...Done
14:33:09:SuperResolution: Setting permissions on runtimes folder...Done
14:33:09:SuperResolution: Setting permissions on persisted data folder...Done
14:33:09:SuperResolution: GPU support
14:33:40:Response timeout. Try increasing the timeout value
14:36:03:SuperResolution: Searching for nvidia-cuda-toolkit...installing... Done
14:36:03:SuperResolution: CUDA (NVIDIA) Present: Yes (CUDA 11.5, No cuDNN found)
14:36:05:SuperResolution: ROCm (AMD) Present: (attempt to install rocminfo... ) No
14:36:05:SuperResolution: MPS (Apple) Present: No
14:36:06:SuperResolution: Reading module settings.......Done
14:36:06:SuperResolution: Processing module SuperResolution 1.8.3
14:36:06:SuperResolution: Installing Python 3.8
14:36:06:SuperResolution: Python 3.8 is already installed
14:36:09:SuperResolution: Ensuring PIP in base python install... done
14:36:11:SuperResolution: Upgrading PIP in base python install... done
14:36:11:SuperResolution: Installing Virtual Environment tools for Linux...
14:36:13:SuperResolution: Searching for python3-pip python3-setuptools python3.8...All good.
14:36:22:SuperResolution: Creating Virtual Environment (Local)... Done
14:36:22:SuperResolution: Checking for Python 3.8...(Found Python 3.8.18) All good
14:36:40:SuperResolution: Upgrading PIP in virtual environment... done
14:36:45:SuperResolution: Installing updated setuptools in venv... Done
14:36:45:SuperResolution: No custom setup steps for this module
14:36:45:SuperResolution: Installing Python packages for Super Resolution
14:36:45:SuperResolution: Installing GPU-enabled libraries: No
14:36:46:SuperResolution: Searching for python3-pip...All good.
14:36:48:SuperResolution: Ensuring PIP compatibility... Done
14:36:48:SuperResolution: Python packages will be specified by requirements.linux.txt
14:37:38:SuperResolution: - Installing ONNX, the Open Neural Network Exchange library... (✔️ checked) Done
14:38:03:SuperResolution: - Installing ONNX runtime, the scoring engine for ONNX models... (✔️ checked) Done
14:38:11:SuperResolution: - Installing resizeimage, which provides functions for easily resizing images... (✔️ checked) Done
14:38:12:SuperResolution: - Installing Pillow, a Python Image Library...Already installed
14:42:56:SuperResolution: - Installing Torch, for Tensor computation and Deep neural networks... (✔️ checked) Done
14:42:57:SuperResolution: - Installing NumPy, a package for scientific computing...Already installed
14:42:57:SuperResolution: Installing Python packages for the CodeProject.AI Server SDK
14:42:58:SuperResolution: Searching for python3-pip...All good.
14:43:00:SuperResolution: Ensuring PIP compatibility... Done
14:43:00:SuperResolution: Python packages will be specified by requirements.txt
14:43:01:SuperResolution: - Installing Pillow, a Python Image Library...Already installed
14:43:02:SuperResolution: - Installing Charset normalizer...Already installed
14:43:10:SuperResolution: - Installing aiohttp, the Async IO HTTP library... (✔️ checked) Done
14:43:15:SuperResolution: - Installing aiofiles, the Async IO Files library... (✔️ checked) Done
14:43:20:SuperResolution: - Installing py-cpuinfo to allow us to query CPU info... (✔️ checked) Done
14:43:21:SuperResolution: - Installing Requests, the HTTP library...Already installed
14:43:33:SuperResolution: 2024-01-22 14:43:33.784350647 [W:onnxruntime:, graph.cc:1283 Graph] Initializer conv1.bias appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
14:43:33:SuperResolution: 2024-01-22 14:43:33.784369783 [W:onnxruntime:, graph.cc:1283 Graph] Initializer conv1.weight appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
14:43:33:SuperResolution: 2024-01-22 14:43:33.784374033 [W:onnxruntime:, graph.cc:1283 Graph] Initializer conv2.bias appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
14:43:33:SuperResolution: 2024-01-22 14:43:33.784377374 [W:onnxruntime:, graph.cc:1283 Graph] Initializer conv2.weight appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
14:43:33:SuperResolution: 2024-01-22 14:43:33.784380425 [W:onnxruntime:, graph.cc:1283 Graph] Initializer conv3.bias appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
14:43:33:SuperResolution: 2024-01-22 14:43:33.784383500 [W:onnxruntime:, graph.cc:1283 Graph] Initializer conv3.weight appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
14:43:33:SuperResolution: 2024-01-22 14:43:33.784386923 [W:onnxruntime:, graph.cc:1283 Graph] Initializer conv4.bias appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
14:43:33:SuperResolution: 2024-01-22 14:43:33.784389978 [W:onnxruntime:, graph.cc:1283 Graph] Initializer conv4.weight appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
14:43:34:SuperResolution: Self test: Self-test passed
14:43:34:SuperResolution: Module setup time 00:07:29
14:43:34:SuperResolution: Setup complete
14:43:34:SuperResolution: Total setup time 00:10:25
14:43:34:Module SuperResolution installed successfully.
14:43:34:Installer exited with code 0
14:43:34:Module SuperResolution not configured to AutoStart.
|
|
|
|
|
My guess is there was a caching issue.

cheers
Chris Maunder
|
|
|
|
|
Good morning,
Where can I find the current Docker version? From CodeProject?
|
|
|
|
|
|
Currently using 2.2.4, and the YOLOv5 6.2 module works great with my GPU.
I tried upgrading to 2.3.4, and YOLOv5 6.2 would no longer use my GPU.
The GPU was detected, but it would not be used.
Any idea what I am doing wrong?
(I deleted Program Files/CodeProject and ProgramData/CodeProject.)
Server version: 2.2.4-Beta
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz (Intel)
1 CPU x 4 cores. 8 logical processors (x64)
GPU: NVIDIA GeForce GTX 1650 (4 GiB) (NVIDIA)
Driver: 536.23 CUDA: 12.2 (max supported: 12.2) Compute: 7.5
System RAM: 32 GiB
Target: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
.NET framework: .NET 7.0.5
Video adapter info:
Intel(R) HD Graphics 4600:
Driver Version 20.19.15.4624
Video Processor Intel(R) HD Graphics Family
Microsoft Remote Display Adapter:
Driver Version 10.0.19041.3636
Video Processor
NVIDIA GeForce GTX 1650:
Driver Version 31.0.15.3623
Video Processor NVIDIA GeForce GTX 1650
System GPU info:
GPU 3D Usage 1%
GPU RAM Usage 2.3 GiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
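For reference, a quick first check when a module detects the GPU but won't use it is whether the PyTorch build inside that module's virtual environment can see CUDA at all. This is a generic diagnostic sketch, not a CodeProject.AI-specific tool; run it with the `python.exe` from the module's own venv.

```python
# Generic check of the PyTorch build's CUDA visibility (assumption: run with
# the interpreter from the module's virtual environment, where torch is
# installed). A CPU-only torch wheel will report CUDA as unavailable even
# when the driver and GPU are fine.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA runtime:", torch.version.cuda)
    print("device:", torch.cuda.get_device_name(0))
```

If this prints `CUDA available: False` after the upgrade, the 2.3.4 installer most likely pulled a CPU-only Torch wheel into the module's venv, which would match the "GPU detected but not used" symptom.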
|
|
|
|
|