|Yeah, I've played this game before, and while it's nice to think that everything is in memory and so it should be easy, there are some surprising hurdles that need to be cleared first. Here are some of my IPC notes.
The most important initial concern you need to address is: what is the exact nature of the communication between the services? Is it simplex, or do you need duplex? What sort of update rate can you expect? Do you need an acknowledged connection (like TCP), or can you fire and forget (like UDP)? Will you only ever have two communicating modules, or might you want to add more down the line?
Defining the nature of the interactions will help you determine the appropriate technology to use. In the most basic model, where one application writes data that another application reads, the easiest thing in the world is to have the writer produce a file that the other app(s) will read. This is a good, stable solution for situations where race conditions are low-impact issues.
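The one trick worth knowing with the file approach is to write to a temporary file and atomically rename it into place, so a reader never sees a half-written file. A minimal sketch (in Python for illustration; the same pattern works with File.Replace in .NET):

```python
import json
import os
import tempfile

def publish(path, payload):
    """Write payload to a temp file, then atomically swap it into place.

    os.replace is atomic on the same filesystem, so a concurrent reader
    sees either the complete old file or the complete new one.
    """
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(payload, f)
        os.replace(tmp, path)  # the atomic swap is the whole trick
    except BaseException:
        os.unlink(tmp)
        raise

def consume(path):
    """Read the most recently published payload."""
    with open(path) as f:
        return json.load(f)

# The two calls below would normally live in two different processes.
publish("status.json", {"temp_c": 21.5, "ok": True})
print(consume("status.json"))
```

The file name and payload here are made up for the example; the point is only the write-then-rename discipline.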
If you need duplex communication, things become slightly more complicated. For most purposes, a LocalDB instance can be used to share data between applications. This works well for bi-directional sharing, but, like the file method, it does not support event-based interactions: it just shares flat data.
If the two processes need to talk, i.e. one needs to actively query the other, the level of complexity jumps considerably. You can have one process query the other over loopback HTTP using a standard Web API and HttpClient, and that's likely to be your easiest solution. I know it sounds stupid, but having walked this path before: unless you're willing to spend real time architecting an IPC process, it is the easiest route. Another option is to stick with files and have each process actively watch directories for changes (e.g. with a FileSystemWatcher), but there is a serious performance cost to that solution.
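To show how little ceremony the loopback-HTTP route needs, here's a self-contained sketch in Python (a thread stands in for the second process; the endpoint and payload are invented for the example):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal "service B" that service A can query like any web API.
class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ready", "queue_depth": 3}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StatusHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Service A" issues an ordinary HTTP GET against loopback.
url = f"http://127.0.0.1:{server.server_port}/status"
with urllib.request.urlopen(url) as resp:
    reply = json.load(resp)
print(reply)
server.shutdown()
```

The .NET equivalent is an ASP.NET minimal API on one side and an HttpClient on the other; the architecture is identical, which is why this "sounds stupid but works" route is so cheap to build.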
The other option is to make use of an Inter-Process Communication (IPC) technology. In .NET that generally means using Pipes or Memory-Mapped Files. Both have benefits and pitfalls, but are best suited to event-driven processes and the communication of complex data.
Memory-mapped files are the closest thing to what you're initially describing: a section of memory that can be shared between multiple processes. Be aware that they provide no synchronization on their own, so you must coordinate readers and writers yourself (typically with a named Mutex or EventWaitHandle). I can tell you from the start that there are complexities here that will surprise you, and I've not had enough success with them to advise you further.
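The core idea is easy to show even if the production details are not. A sketch using Python's mmap module (the .NET analogue is MemoryMappedFile with a ViewAccessor); the file name and record layout are invented for the example:

```python
import mmap
import struct

PATH = "shared.bin"
SIZE = 64

# Create the backing file. A second process mapping this same file
# would see the bytes directly, with no copies or serialization.
with open(PATH, "wb") as f:
    f.write(b"\x00" * SIZE)

# Writer side: pack a sequence number and a reading into the region.
# NOTE: there is no locking here -- real code needs a mutex/semaphore
# alongside the mapping, which is where the surprises live.
with open(PATH, "r+b") as f:
    writer = mmap.mmap(f.fileno(), SIZE)
    writer[:12] = struct.pack("<Id", 1, 21.5)  # 4-byte seq + 8-byte double
    writer.flush()
    writer.close()

# Reader side: map the same file and unpack the same layout.
with open(PATH, "r+b") as f:
    reader = mmap.mmap(f.fileno(), SIZE)
    seq, value = struct.unpack("<Id", reader[:12])
    reader.close()
print(seq, value)
```

Both sides must agree byte-for-byte on the layout, which is exactly the kind of fragile contract that makes this approach harder than it first looks.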
Pipes are the IPC mechanism I do know a bit more about, and they come in two flavors: named and anonymous. Named pipes provide a great mechanism for multiple processes to hook into a central bus and talk to one another; they're a good way to build an extensible system that future modules can hook into. Using them appropriately requires a couple of things: first, you must secure them appropriately, because named pipes can be reachable over the network; second, you need to develop an addressing and message-passing scheme for your modules.
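The bus-with-addressing idea looks roughly like this. The sketch uses Python's multiprocessing.connection (which uses real Windows named pipes when given a \\.\pipe\ address; a loopback address is used here so it runs anywhere), with a thread standing in for a second process and invented module names:

```python
import threading
from multiprocessing.connection import Client, Listener

# The authkey plays the "secure it" role: both ends must present it.
listener = Listener(("localhost", 0), authkey=b"shared-secret")
address = listener.address

def serve_one():
    # A real bus would loop forever, remember which connection belongs
    # to which module name, and route envelopes between them. This
    # stub handles a single client to keep the sketch short.
    conn = listener.accept()
    envelope = conn.recv()                      # addressed message in
    conn.send({"to": envelope["from"],          # addressed reply out
               "from": "bus", "ack": True})
    conn.close()

threading.Thread(target=serve_one, daemon=True).start()

# A module connects and speaks in addressed envelopes.
with Client(address, authkey=b"shared-secret") as conn:
    conn.send({"from": "sensor-module", "to": "bus", "msg": "hello"})
    reply = conn.recv()
print(reply)
listener.close()
```

The envelope dict is the "addressing and message-passing scheme" the text mentions in its smallest possible form: every message says who it is from and who it is for, so the bus can grow new modules without changing the protocol.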
Anonymous pipes are unidirectional (though two anonymous pipes used in conjunction can provide full-duplex communication) and work well when you have a pre-defined number of modules. The work in this case shifts from designing addressing/messaging to the initial establishment of the pipe.
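The two-pipes-for-duplex arrangement, sketched with raw OS pipes (a thread stands in for the child process to keep it self-contained; in .NET you'd pass the AnonymousPipeServerStream client handle to the child at launch, which is the "establishment" work mentioned above):

```python
import os
import threading

# Two one-way anonymous pipes, paired to give full duplex:
req_r, req_w = os.pipe()    # parent -> worker (requests)
resp_r, resp_w = os.pipe()  # worker -> parent (responses)

def worker():
    # Worker end: read a request from one pipe, answer on the other.
    request = os.read(req_r, 1024)
    os.write(resp_w, b"ack:" + request)

threading.Thread(target=worker).start()

os.write(req_w, b"ping")          # parent sends a request
reply = os.read(resp_r, 1024)     # ...and blocks for the response
print(reply)
```

There is no addressing at all: whoever holds the file descriptors is the conversation, which is exactly why this fits a fixed, pre-defined set of modules.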
There are other approaches, many of them encapsulated by WCF in fact, that are generally touched on here:
Interprocess Communications - Windows applications | Microsoft Docs
Anyway, have fun dipping into the joy that is IPC!
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor