Hi Gerry, I followed suit: the more experienced posters tend to send all AI questions to the AI forum because, like me, they tend to think a better answer or response will be achieved there than in the Q&A.
The redirects are usually for people who post questions relating to CodeProject.AI, and we point them to the CodeProject.AI Discussions[^] forum rather than the general AI one here. And yes, I agree it can be confusing.
Since this is not remotely in my field of expertise, I decided to ask an AI about an AI question. Please check and confirm each suggestion, as it is well known that ChatGPT produces false information in certain scenarios. Suggestions given by ChatGPT:
You can explore more advanced techniques like polynomial regression, decision trees, random forests, support vector machines, and neural networks.
Polynomial Regression: This involves fitting a polynomial equation to the data, allowing for curved relationships between X and Y. You can experiment with different polynomial degrees to capture varying degrees of complexity.
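As a rough sketch of that idea (my own toy example, not from ChatGPT; the quadratic data and noise level are arbitrary choices), numpy's `polyfit` makes it easy to compare polynomial degrees:

```python
import numpy as np

# Toy data with a known quadratic relationship plus a little noise.
rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 100)
y = 2.0 * x**2 - 1.5 * x + 0.5 + rng.normal(0.0, 0.1, size=x.shape)

# Fit polynomials of increasing degree and compare the in-sample error.
mse = {d: np.mean((y - np.polyval(np.polyfit(x, y, d), x)) ** 2)
       for d in (1, 2, 3)}

# The degree-2 fit should recover coefficients close to (2.0, -1.5, 0.5).
best = np.polyfit(x, y, 2)
```

A straight line (degree 1) cannot follow the curve, so its error is far larger than the degree-2 fit's.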
Decision Trees and Random Forests: Decision trees split the data based on different conditions and can handle both linear and non-linear relationships. Random forests are an ensemble of decision trees that can handle complex relationships more effectively.
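To make the splitting idea concrete, here is a minimal "decision stump" (a tree with a single split) in plain numpy; this is my own illustrative sketch, with toy step-shaped data, not anything from ChatGPT. Real trees and forests recurse and ensemble this building block:

```python
import numpy as np

def fit_stump(x, y):
    """One-split regression tree: choose the threshold on x that minimizes
    squared error when each side predicts its own mean."""
    best = None
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    return best[1:]  # (threshold, left_mean, right_mean)

def predict_stump(stump, x):
    t, lo, hi = stump
    return np.where(x <= t, lo, hi)

# Step-shaped toy data: a single split captures the jump at x = 0 exactly.
x = np.linspace(-1.0, 1.0, 50)
y = np.where(x <= 0, 1.0, 3.0)
stump = fit_stump(x, y)
```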
Support Vector Machines (SVM): SVMs are good for both linear and non-linear patterns. They aim to find a hyperplane that best separates the data while considering a "kernel trick" to map the data into a higher-dimensional space for better separation.
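The "kernel trick" itself is easy to demonstrate: instead of mapping points into a higher-dimensional space explicitly, you compute inner products in that space with a kernel function. A small sketch of the RBF (Gaussian) kernel matrix, with `gamma = 0.5` as an arbitrary choice:

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = rbf_kernel(X)
```

The resulting matrix is symmetric with ones on the diagonal (every point is identical to itself), which is what an SVM solver consumes in place of raw coordinates.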
Neural Networks: Neural networks are capable of capturing complex patterns and relationships in data. You can design a network with multiple layers and nodes to model intricate non-linear connections between X and Y.
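As an illustration (again my own toy sketch: the layer size, learning rate, and iteration count are arbitrary, untuned choices), a one-hidden-layer network trained by plain gradient descent can fit XOR, a pattern no linear model can capture:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([[0.0], [1.0], [1.0], [0.0]])  # XOR targets

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

lr, losses = 0.1, []
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)              # hidden-layer activations
    out = h @ W2 + b2                     # linear output layer
    err = out - y
    losses.append(float((err ** 2).mean()))
    g_out = 2.0 * err / len(X)            # gradient of MSE w.r.t. output
    g_h = (g_out @ W2.T) * (1.0 - h**2)   # backprop through tanh
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X.T @ g_h;  b1 -= lr * g_h.sum(axis=0)
```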
Gradient Boosting: This ensemble technique combines multiple weak learners (usually decision trees) to create a strong predictive model. It's powerful for capturing complex relationships.
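The residual-fitting loop at the heart of boosting is short enough to sketch directly (toy sine data, 20 rounds, and a 0.5 shrinkage factor are all arbitrary illustrative choices): each round fits a tiny weak learner, here a one-split stump, to whatever the running prediction still gets wrong:

```python
import numpy as np

def fit_stump(x, y):
    """One-split weak learner: threshold minimizing squared error."""
    best = None
    for t in np.unique(x)[:-1]:
        l, r = y[x <= t], y[x > t]
        sse = ((l - l.mean()) ** 2).sum() + ((r - r.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, l.mean(), r.mean())
    return best[1:]

x = np.linspace(0.0, 1.0, 80)
y = np.sin(2.0 * np.pi * x)

pred = np.full_like(y, y.mean())          # round 0: predict the global mean
errors = [((y - pred) ** 2).mean()]
for _ in range(20):                       # each round corrects the residual
    t, lo, hi = fit_stump(x, y - pred)
    pred += 0.5 * np.where(x <= t, lo, hi)
    errors.append(((y - pred) ** 2).mean())
```

Each round removes a fraction of the remaining error, so the error sequence falls steadily.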
K-Nearest Neighbors (KNN): This instance-based learning algorithm uses nearby data points to predict the value of a new point. It can capture local patterns that may not be apparent with global models.
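KNN regression fits in a few lines (my own sketch; `k = 3` is an arbitrary choice): predict a new point's value as the average of the k closest training points.

```python
import numpy as np

def knn_predict(x_train, y_train, x_new, k=3):
    """Average the y-values of the k training points nearest to x_new."""
    dists = np.abs(x_train - x_new)
    nearest = np.argsort(dists)[:k]
    return y_train[nearest].mean()

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 4.0, 9.0, 16.0])  # y = x**2
pred = knn_predict(x, y, 2.1)             # neighbors: x = 2, 3, 1
```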
Clustering Algorithms: If you're interested in discovering inherent patterns in your data without a specific Y variable, clustering techniques like k-means or hierarchical clustering might be useful.
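Plain k-means is just two alternating steps, sketched here on two well-separated toy blobs (the blob positions and seeding are arbitrary choices of mine): assign each point to its nearest centroid, then move each centroid to the mean of its points.

```python
import numpy as np

rng = np.random.default_rng(3)
blob_a = rng.normal([0.0, 0.0], 0.2, (30, 2))
blob_b = rng.normal([5.0, 5.0], 0.2, (30, 2))
X = np.vstack([blob_a, blob_b])

centroids = np.array([X[0], X[-1]])        # seed one centroid per blob
for _ in range(10):
    # Assignment step: nearest centroid by squared distance.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    labels = d.argmin(axis=1)
    # Update step: each centroid moves to the mean of its points.
    centroids = np.array([X[labels == j].mean(axis=0) for j in range(2)])
```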
Feature Engineering: Transform your features (X variables) to create new ones that might capture complex relationships more effectively. This might involve interactions, logarithmic transformations, or other functions.
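A quick numpy illustration of why this helps (toy cubic data of my own choosing): `y = 4*x**3` is badly non-linear in `x`, but a linear fit on the engineered feature `x**3` is exact, and the slope of `log(y)` against `log(x)` recovers the power-law exponent.

```python
import numpy as np

x = np.linspace(1.0, 10.0, 50)
y = 4.0 * x**3

# Linear fit on the raw feature vs. on the engineered feature x**3.
raw_mse = np.mean((y - np.polyval(np.polyfit(x, y, 1), x)) ** 2)
eng_mse = np.mean((y - np.polyval(np.polyfit(x**3, y, 1), x**3)) ** 2)

# Log-log transform turns the power law into a straight line of slope 3.
exponent = np.polyfit(np.log(x), np.log(y), 1)[0]
```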
Time Series Analysis: If your data has a temporal component, time series techniques like ARIMA (AutoRegressive Integrated Moving Average) or LSTM (Long Short-Term Memory) networks could be useful.
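Full ARIMA is more than a snippet, but its "AR" building block can be sketched in a few lines (my own toy example; the true coefficient 0.8 and series length are arbitrary): an AR(1) model `y[t] = a*y[t-1] + noise`, with `a` estimated by least squares on lagged pairs.

```python
import numpy as np

rng = np.random.default_rng(4)
n, a_true = 500, 0.8
y = np.zeros(n)
for t in range(1, n):                     # simulate the AR(1) process
    y[t] = a_true * y[t - 1] + rng.normal(0.0, 0.1)

# Least-squares estimate of a from (y[t-1], y[t]) pairs.
a_hat = np.sum(y[:-1] * y[1:]) / np.sum(y[:-1] ** 2)
```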
Dimensionality Reduction: Techniques like Principal Component Analysis (PCA) or t-SNE (t-Distributed Stochastic Neighbor Embedding) can help you visualize and analyze high-dimensional data.
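PCA reduces to an SVD of the centered data; a small sketch (toy data deliberately stretched along one axis so the first component dominates):

```python
import numpy as np

rng = np.random.default_rng(5)
# 3-D data with per-axis scales 5.0, 1.0, 0.2: one dominant direction.
X = rng.normal(0.0, 1.0, (200, 3)) * np.array([5.0, 1.0, 0.2])

Xc = X - X.mean(axis=0)                   # PCA requires centered data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()           # variance ratio per component
reduced = Xc @ Vt[:2].T                   # project onto the top 2 components
```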
Remember that selecting the best technique depends on your specific data and problem. Experiment with different methods, possibly even combining some, to see which one works best for uncovering the patterns you're interested in. Also, don't forget to split your data into training and testing sets to evaluate the performance of each technique accurately.
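The train/test split in that last sentence can be hand-rolled in numpy (my own sketch: the 75/25 split and degree-5 fit are arbitrary choices) by shuffling indices, fitting only on the training portion, and scoring on the held-out portion:

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0.0, 1.0, 100)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.1, 100)

idx = rng.permutation(100)                # shuffle, then hold out 25%
train, test = idx[:75], idx[75:]

coeffs = np.polyfit(x[train], y[train], 5)
train_mse = np.mean((y[train] - np.polyval(coeffs, x[train])) ** 2)
test_mse = np.mean((y[test] - np.polyval(coeffs, x[test])) ** 2)
```

Only `test_mse` tells you how the model generalizes; the training error alone always flatters the fit.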
End of ChatGPT output. I hope this points you in the right direction.
I'm curious whether anyone on these forums or projects is familiar with, or has attended, published, or co-authored at, either USENIX or OSDI.
I attended this week, and there are some absolutely staggering things happening in the ML/AI world beyond what's being reported, especially in both hardware and sheer scope.
Some of the papers and case studies absolutely floored me. But others showed that there are a lot of open-source advancements out there that aren't paywalled (unless you consider minor things like access to Nvidia H100s a paywall, haha).
The first statement in Prolog is an assignment where LP is defined as the negation of the truth value of LP itself. This means that LP is the logical negation of LP, indicating that LP is false if it is true, and true if it is false.
The second statement is a query that checks if there exists a unification between LP and not(true(LP)). It attempts to find a consistent value for LP that satisfies the equation.
In this case, the result of the query is false, indicating that there is no valid unification between LP and not(true(LP)). In other words, there is no consistent value that can simultaneously satisfy the equation LP = not(true(LP)). This suggests that there is a contradiction in the logic, as the equation cannot hold true for any value of LP.
It turns out that ChatGPT is incorrect because Prolog is merely recognizing the pathological self-reference of LP. "true" and "not" in the above context are meaningless placeholders to Prolog.
Prolog equally rejects this expression: X = foo(bar(X)).
I was trying to show that Prolog recognizes and rejects the Liar Paradox. It turns out that Prolog recognizes and rejects every expression that is isomorphic to the Liar Paradox.
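The mechanism behind that rejection is the occurs check: a variable may not be unified with a term that contains that same variable (note that some Prologs only enforce this via `unify_with_occurs_check/2`). Here is a minimal Python sketch of that rule, with all names hypothetical and terms represented as nested tuples:

```python
# Occurs-check sketch: reject binding a variable to any term containing it,
# which is exactly what rules out X = foo(bar(X)) and LP = not(true(LP)).
class Var:
    """A logic variable; identity is what matters, not a name."""
    pass

def occurs(v, term):
    """True if variable v appears anywhere inside term."""
    if term is v:
        return True
    if isinstance(term, tuple):                  # (functor, arg, arg, ...)
        return any(occurs(v, arg) for arg in term[1:])
    return False

def unify_var(v, term):
    """Bind v to term only if the occurs check passes; else fail (None)."""
    return None if occurs(v, term) else {v: term}

X = Var()
ok = unify_var(X, ("foo", ("bar", "baz")))       # no self-reference: binds
bad = unify_var(X, ("foo", ("bar", X)))          # X = foo(bar(X)): rejected
LP = Var()
liar = unify_var(LP, ("not", ("true", LP)))      # LP = not(true(LP)): rejected
```

The Liar Paradox fails for the same structural reason as `X = foo(bar(X))`: both ask a variable to contain itself.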
Stopping short of achieving your level of incompetence.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
Has anyone else had success running CodeProject AI in a Docker container? I've spent days trying to troubleshoot what is going on with my deployment without success.
Here's my setup: At the base I'm running ESXi. Blue Iris is running in a Win10 VM. For the Docker setup, I'm running PhotonOS in a VM, with Portainer on top to give me a GUI for Docker. Inside Docker, I'm pulling in the codeproject/ai-server image. I'm using macvlan as the networking config to give it an IP on the LAN.
The CPAI container spins up fine, I can access the web GUI fine. I can ping it from the BI server. But it doesn't respond to any kind of requests. I've noticed that if I go into the CPAI Explorer, none of the models are showing up except for a few that appear to be defaults. If I load up an image in Explorer and ask it to analyze it, nothing happens. I just get a timeout and no logs generated.
I've triple checked that I've got the model folder mapped to "/app/modules/ObjectDetectionYolo/custom-models". I've ssh'd to the container and verified the models show up in there, but CPAI doesn't seem to recognize them.
That was an error in the module listing. You'll need to wait till server 2.1 comes out (in alpha testing today) for the update to be accessible. However: the update is only a structural update to fit the new architecture of server 2.1, so you're not missing anything major.