I need to move a set of databases from a Windows 2008 R2 server to a new Windows 2012 database server. I've looked at several articles, but haven't found one that lays out the complete process from start to finish. Can someone direct me to a good article to follow?
I understand what you're saying about it being very easy, but the main issue is that we are building the new 2012 server from scratch and want to ensure everything is done right. This includes server setup, settings, etc.
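Not the article you're after, but for reference the usual route is backup on the old server, copy, restore on the new one. Here is a minimal sketch that just generates the T-SQL for that approach; the share path and database names are made-up examples:

```python
# Sketch: generate the T-SQL for a backup/restore migration.
# The backup share and database names below are hypothetical.
def backup_sql(db, backup_dir=r"\\fileshare\sqlbackups"):
    """BACKUP DATABASE statement to run on the old 2008 R2 server."""
    return (f"BACKUP DATABASE [{db}] "
            f"TO DISK = N'{backup_dir}\\{db}.bak' WITH COMPRESSION;")

def restore_sql(db, backup_dir=r"\\fileshare\sqlbackups"):
    """RESTORE DATABASE statement to run on the new 2012 server."""
    return (f"RESTORE DATABASE [{db}] "
            f"FROM DISK = N'{backup_dir}\\{db}.bak' WITH RECOVERY;")

for db in ["Sales", "Inventory"]:  # hypothetical database list
    print(backup_sql(db))
    print(restore_sql(db))
```

You'd still need to script out logins, jobs, and linked servers separately, since those live outside the databases themselves.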
Suppose you purchase a piece of server software and install it on a machine. Ideally I would like to tie the license to the underlying machine, but it may not be a physical machine at all; it may be a VM.
Now, if you want to start up a second instance, I want to ensure that you are paying for that second instance.
Because of this, I need to find a way to distinguish them even though the underlying hardware may be the same.
I thought of using the PC name, but even that may be problematic, because I can't be totally sure that *ALL* cloud vendors (not just the VM vendors, but also their customers who, in turn, become sellers to my customers) will allow it to be changed.
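One common mitigation is to combine several identifiers rather than relying on the PC name alone, so that no single cloned or unchangeable value defeats the check. A minimal sketch (the choice of sources here is illustrative, not a complete anti-piracy scheme):

```python
import hashlib
import socket
import uuid

def machine_fingerprint():
    """Hash several machine identifiers together; each source alone is
    spoofable or clonable, but combining them raises the bar a little."""
    parts = [
        socket.gethostname(),         # PC name - may be fixed by the vendor
        format(uuid.getnode(), "x"),  # primary MAC - may be virtualised
    ]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()
```

Note that a cloned VM reproduces all of these values, so for VMs you ultimately need a server-side check (e.g. counting concurrent activations per license key) rather than a purely local fingerprint.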
I'm a freelance junior sysadmin currently tasked with setting up the hardware for a tech startup. We're hosting a software service and need a solution that makes sense within our budget and requirements.
The ideal goal is an n+1 design with high security and long-term high availability, without breaking the bank. The budget is ~$6k for the initial round of investment, for gear only. Additional funds will be allocated for a year of quarter-rack space in a datacenter local to me.
I'm a bit over my head with my current knowledge base, and intend to bridge the gap with a lot of pre-planning over the next 60 days. I figure 60 days before ordering any gear, 30 days for it all to come in and get initially configured at the datacenter, then 90 days to build out our hosting interface and properly test the system before going to production. Six months would be nice, but eight months is being allocated for the project.
Okay, so onto the gear and setup:
(2) Cisco SG300-10 switches: one wired and active, the second rack-mounted and ready for wiring in case of failure.
(2) pfSense firewalls: primary <> secondary, with heartbeat in load-balancing + failover mode.
(2) Nginx load balancers: primary <> secondary, configured in pfSense for failover.
A 4:1 ratio of active to failover app/data servers. I think failover can be configured with the Nginx load balancer, or else in pfSense.
Initial deployment is the two firewalls, two LBs, and 4+1 app/data servers, with the expectation of adding more app/data servers as demand increases.
The service runs on a LEMP stack. A master-to-master MySQL link exists between each A/D server and the failover, on separate partitions and separate MySQL instances. It continuously syncs the database with each active A/D server, ready for activation in the event an LB declares an active node dead. The failover is allotted higher resources: more memory, a higher-thread-count CPU, and n(A/D) hard disk space.
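The "LB declares an active dead" step above boils down to a health probe plus a backend swap. A real pfSense/Nginx setup would use its own built-in checks, but the logic can be sketched as a simple TCP probe; the hostnames and ports below are hypothetical:

```python
import socket

def is_alive(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds - the
    simplest 'is this A/D server up?' probe a load balancer can run."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_backend(active, failover, port=80):
    """Route to the active node, or fail over if the probe fails."""
    return active if is_alive(active, port) else failover
```

In practice you'd probe an application-level URL rather than a bare TCP port, so a wedged-but-listening Nginx still counts as dead.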
Additionally, I plan to use an anycast DDoS prevention service provided by my colocation provider. I am wondering what drawbacks there are to my model.
The first 3-4 months are for designing and deploying the system; the next 3-4 are for linking the system to the software with the developers so that we can auto-provision services. I plan to rely heavily on scripts for this.
It should be easily doable via tunneling. A VPN on AC1 alone won't give your BS1 server a good route back to AS1. If it's a simple web service that doesn't require a lot of security, you could always give AS1 a publicly routable address (from your ISP) and let BS1 access AS1's web service over the web. Web services typically aren't blocked by firewalls, but you still need an internet-facing IP address for AS1.
There is a good discussion on how to configure reverse VPN tunnelling. It looks like connecting back into the system is a complicated issue. The essence of your problem is that even if you can do the DNS registration (which lets the servers find each other), the actual IP connection between your machines is still impossible.
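The usual workaround when direct inbound connections are impossible is a reverse SSH tunnel: AS1 opens an outbound connection to a publicly reachable relay and exposes its local service there, and BS1 connects to the relay instead. A sketch that just assembles the command (the relay hostname, user, and ports are hypothetical):

```python
def reverse_tunnel_cmd(relay_host, relay_port, local_port, user="tunnel"):
    """Build the 'ssh -R' command AS1 would run: forward
    relay_host:relay_port back to AS1's localhost:local_port.
    BS1 then talks to relay_host:relay_port."""
    return ["ssh", "-N",
            "-R", f"{relay_port}:localhost:{local_port}",
            f"{user}@{relay_host}"]

print(" ".join(reverse_tunnel_cmd("relay.example.com", 8080, 80)))
```

This only needs outbound connectivity from AS1, which is exactly what firewalled/NATed hosts usually still have.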
It depends what you need or want from your server. And, by the way, a third option "between" these two is a Virtual Private Server (VPS), which, IMO, is as good as a dedicated server (but much cheaper) unless your site is particularly large (or attracts lots of traffic) or is resource-heavy.
But the main advantage of a VPS or dedicated server is that you have full (virtual) control over the server, so you can configure it as you want - obviously only an advantage if there is something particular you need to configure - and you can also install and run your own programs (exes) in the background to perform all sorts of related tasks.
I first thought of SNMP, but that's obviously wrong (you need to know the TCP/IP address before you can query a device about its abilities)... SNMP does have a discovery process, but I'm not sure how it works.
(I used to have a discovery tool based on WNetEnum... that fed the data to Visio to draw, but I have long lost where I put it - it was a DDJ tool, and I can't recall if it did switches or just 'anything IP'.)
Check out Splunk while you're at it and see if it does discovery.
I'm not sure what your motivation is, Richard - if you were being paid and had to write a tool (cheaper than SolarWinds, for example), I'm sure you could gather a list of all the TCP/IP addresses on your network, then reduce/filter the list (e.g. remove PCs, printers...), leaving a list of addresses you could issue SNMP calls against (for instance)... that's what I would do.
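The gather-then-filter-then-query approach above can be sketched like this; the subnet and exclusion list are examples, and the actual SNMP step (via a library like pysnmp, or shelling out to snmpget) is left as a comment:

```python
import ipaddress

def candidate_hosts(cidr, exclude=()):
    """Enumerate every host address in the subnet, minus known
    non-targets (PCs, printers, ...). Each surviving address would
    then get an SNMP query, e.g. for sysDescr (.1.3.6.1.2.1.1.1.0)."""
    skip = {ipaddress.ip_address(a) for a in exclude}
    return [str(a) for a in ipaddress.ip_network(cidr).hosts()
            if a not in skip]

for addr in candidate_hosts("192.168.1.0/30"):
    # issue SNMP call against addr here (pysnmp / snmpget)
    print(addr)
```

The filtering step is where the real savings are: SNMP timeouts against dead or non-SNMP addresses are what make naive sweeps slow.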