|
heh. I look at it this way - if we don't have a mathematical model for it, then we're limited in the sorts of transformations we can do to the code.
Why would anyone want to transform code? A compiler does just that. A mathematical model lends itself to rigorous checking as well.
I'm not a purist about it, but I certainly see the advantages of it and it's one of the reasons I'm fond of functional programming.
Real programmers use butterflies
|
|
|
|
|
The problem isn't OO; slavish fanatical adherence to anything at all screws everything up -- and it's certainly non-evolutionary. Doing things one way and one way only results in restrictions to growth and expansion.
Given the above immutable fact, rigid adherence to OO practices is obviously wrong before even going into details, so I won't waste my time going into any (plus I don't have a week to spare).
I wanna be a eunuchs developer! Pass me a bread knife!
|
|
|
|
|
I have a Console application that disagrees
using System;

namespace ConsoleApp1
{
    class Program
    {
        static void Main()
        {
            IMessageGetter messageGetter = new BoohCodewitchMessageGetter();
            IMessagePrinter messagePrinter = new ConsoleMessagePrinter();
            IInputAwaiter inputAwaiter = new ConsoleInputAwaiter();
            string message = messageGetter.GetMessage();
            messagePrinter.PrintMessage(message);
            inputAwaiter.AwaitInput();
        }
    }

    public interface IMessageGetter
    {
        string GetMessage();
    }

    public interface IMessagePrinter
    {
        void PrintMessage(string message);
    }

    public interface IInputAwaiter
    {
        void AwaitInput();
    }

    public abstract class BaseMessageGetter : IMessageGetter
    {
        public abstract string GetMessage();
    }

    public abstract class BaseMessagePrinter : IMessagePrinter
    {
        public abstract void PrintMessage(string message);
    }

    public abstract class BaseInputAwaiter : IInputAwaiter
    {
        public abstract void AwaitInput();
    }

    public class BoohCodewitchMessageGetter : BaseMessageGetter
    {
        public override string GetMessage() => "Booh codewitch, your opinion sucks!";
    }

    public class ConsoleMessagePrinter : BaseMessagePrinter
    {
        public override void PrintMessage(string message) => Console.WriteLine(message);
    }

    public class ConsoleInputAwaiter : BaseInputAwaiter
    {
        public override void AwaitInput() => Console.ReadKey();
    }
}
|
|
|
|
|
*headdesk*
Real programmers use butterflies
|
|
|
|
|
That code is uber 1337!
But usually...
TL;DR: I agree with your post.
The long version:
I tend to write a bunch of interfaces (as necessary) that explain the function of the code.
Take, for example, an IUserRepository.
When I see an (ASP.NET Core) Controller being injected with an IUserRepository I know this Controller does something with users.
I don't know (or care) where the users come from, but I know I need them.
If you look at the specific code that uses the IUserRepository you'll find stuff like userRepository.GetUser(id), which is way more descriptive than some code that accesses a database.
So in that sense, I often use classes and methods to describe what my code is doing.
That, for me, and to lesser extent re-use of code, are the biggest pros of OOP.
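A minimal sketch of that idea (only IUserRepository and GetUser(id) come from the post; the User class, UsersController, and InMemoryUserRepository are my own illustration, without the ASP.NET Core attributes so it stays self-contained):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical domain type; the post only mentions "users".
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The interface describes *what* the code needs, not where the data lives.
public interface IUserRepository
{
    User GetUser(int id);
}

// A controller that declares its dependency via the interface;
// it neither knows nor cares where the users come from.
public class UsersController
{
    private readonly IUserRepository userRepository;

    public UsersController(IUserRepository userRepository) =>
        this.userRepository = userRepository;

    public string GetUserName(int id) => userRepository.GetUser(id).Name;
}

// Any implementation will do: a database, a web service, or an in-memory fake.
public class InMemoryUserRepository : IUserRepository
{
    private readonly Dictionary<int, User> users = new Dictionary<int, User>
    {
        [1] = new User { Id = 1, Name = "Ada" }
    };

    public User GetUser(int id) => users[id];
}
```

Swapping InMemoryUserRepository for a database-backed implementation changes nothing in the controller, which is exactly the "I don't know (or care) where the users come from" point.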
I'm not a big fan of re-use anymore.
Back in the day I re-used all the things, but just because two pieces of code incidentally need the same results doesn't mean they do the same thing.
I now make a clear split between functional re-use and technical re-use.
Functional re-use is rare, because that would mean a user has two ways to do the exact same thing.
It happens, but not all that often.
I think I write my code less "OOP" than seven or even five years ago.
The OOP I still write is more architectural in nature (like I now make heavy use of DI and interfaces, but not so much of base classes and such).
I've written some simple programs in Haskell, a purely functional language, but I think that doesn't work all that well.
It comes naturally to think in objects and to have side effects at some point.
Nevertheless, I started to write my OOP code in a more functional style, mostly without side effects.
I'm pretty sure my bug-to-code ratio went down since I employed the no-side-effects approach.
A function just does its thing and produces a result, but it won't affect the overall flow or state of the program.
All the results come together in the calling function, mostly a controller, and then I do all the side effects in one spot.
Makes the code a lot easier to read and you have a lot less to think about.
It's still OOP, so it doesn't always work like that, but I try when I can.
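A small sketch of that split (the OrderMath and OrderController names are my own illustration, not from the post): the pure function just computes a result, and the calling controller does the side effects in one spot.

```csharp
using System;

public static class OrderMath
{
    // Pure: takes inputs, returns a result, touches no shared state
    // and does not affect the overall flow of the program.
    public static decimal TotalWithDiscount(decimal subtotal, decimal discountRate) =>
        subtotal - subtotal * discountRate;
}

public class OrderController
{
    // The side effect (here just console output) happens in one spot,
    // after the pure calculation has produced its result.
    public void Checkout(decimal subtotal, decimal discountRate)
    {
        decimal total = OrderMath.TotalWithDiscount(subtotal, discountRate);
        Console.WriteLine($"Total: {total}");
    }
}
```

Because OrderMath.TotalWithDiscount has no side effects, it can be tested and reasoned about in isolation, which is where the lower bug-to-code ratio comes from.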
Another change in my code is the use of delegates instead of one-function interfaces.
Makes for less abstraction and classes and it's still easy to read.
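A sketch of that trade-off (IMessageFormatter, ShoutingFormatter, and Announcer are hypothetical names for illustration): the delegate version uses the built-in Func&lt;string, string&gt; where the interface version needs an extra type plus an implementing class.

```csharp
using System;

// One-function interface version: an extra type plus an implementing class.
public interface IMessageFormatter
{
    string Format(string message);
}

public class ShoutingFormatter : IMessageFormatter
{
    public string Format(string message) => message.ToUpper() + "!";
}

// Delegate version: the built-in Func<string, string> stands in for the
// one-function interface, so no extra type or class is needed.
public class Announcer
{
    private readonly Func<string, string> format;

    public Announcer(Func<string, string> format) => this.format = format;

    public string Announce(string message) => format(message);
}
```

Usage is just `new Announcer(m => m.ToUpper() + "!")` - the lambda replaces a whole interface-plus-class pair while staying easy to read.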
The biggest game changer for me, and this saved me a lot of bugs, was when I started to use curly braces for one-line if and loop statements, though.
|
|
|
|
|
You are quarantined for the next 2 weeks to work with only VB6.
|
|
|
|
|
That's not a quarantine, that's a punishment.
M.D.V.
If something has a solution... why do we have to worry? If it has no solution... for what reason do we have to worry?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
Punishment of a cruel and unusual nature.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
NOOOOOOOOOOOOOOOOOOOO
Real programmers use butterflies
|
|
|
|
|
I do what I want to do - what makes sense for what I'm doing.
Every now and then I'll be inspired to wrap functionality into a class - as much for readability as anything else.
Probably because I grew up with that old fashioned idea of a .lib file or something.
Ravings en masse^
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
|
|
|
|
|
That's exactly what I do, even for the sake of encapsulation.
The only difference being a translation unit instead of a class.
Any way that gets the job done is fine.
|
|
|
|
|
Let's face it: C is a successful programming language. C++ has its drawbacks. Java is a pile of crap.
In this regard, how good is OOP?
|
|
|
|
|
C# is pretty great, but then I'm just being difficult.
Real programmers use butterflies
|
|
|
|
|
C# enforces OOP. That's no good.
|
|
|
|
|
OO isn't a problem unless you turn it into a problem.
Unfortunately, a lot of people manage to.
Real programmers use butterflies
|
|
|
|
|
Lua (and C++), for instance, doesn't do that and it simply feels better sometimes.
|
|
|
|
|
I've seen plenty of people, including profs who should darn well know better, try to use C++ as an object-oriented language.
It's one of my peeves.
I want to buy anyone that does it a copy of Accelerated C++ by Andrew Koenig and Barbara Moo, so that they can learn the more effective way to abstract in C++.
Real programmers use butterflies
|
|
|
|
|
The problem with professors is they want to teach their students 'low level stuff', like, for instance, arrays, using C++. It can be done, of course, but it isn't, in my opinion, the smartest way to start teaching C++.
There might also be very-old-school teachers who don't appreciate (or simply are unaware of) the powerful OOP support C++ provides. But I believe this is a negligible minority.
|
|
|
|
|
Generic programming is something anyone can learn easily for about $25-$30 using Accelerated C++ - too bad it's not a textbook.
Real programmers use butterflies
|
|
|
|
|
That's self-teaching.
Professors, on the other hand, exist for a different purpose (producing chaos in student minds).
|
|
|
|
|
CPallini wrote:
C# enforces OOP. That's no good.

I think the real world is similar. They always want me to, say, distinguish between different people, when I see them as a homogeneous grey mass. They even try to tell me that this "object" belongs to "that" object, while I think I should be free to use anything the way I want to. They even say that there are things I am not allowed to look at; it is their "private life". This idea of the world being split into distinct "objects" really bothers me.
|
|
|
|
|
Now you are going philosophical.
|
|
|
|
|
Any tool, even a lame one, is only as good as the person using it.
OOP is great when used and applied correctly.
|
|
|
|
|
You know, the right tool for the right job. For certain jobs OOP is simply not the right tool.
|
|
|
|
|
Agree...
More and more I'm starting to think we have been going the wrong way.
The article that really had me started thinking about this was this one:
https://medium.com/better-programming/object-oriented-programming-the-trillion-dollar-disaster-92a4b666c7c7
Excellent article.
The simplest pieces of code we try to make so abstract that at some point it doesn't make sense anymore and gets hard to understand. You end up with classes like OrderManagerProviderOrchestrator or OrderFactoryStrategy. And all of this because, you know, SOLID, KISS, abstraction, dependency injection, blah blah blah...
We spend so much time making code that way, making it independent, scalable, etc. But in the end, whenever some change is necessary: oh no, this means we have to refactor everything!
|
|
|
|