Channel: High Availability (Clustering) forum
Viewing all 5648 articles

Prevent/Disable "Redirected access"


Hi guys,

Windows Server 2016, failover cluster, fibre connection to the storage. The problem: if the fibre connection to node 1 is lost, the CSV automatically switches to Redirected Access instead of failing the Hyper-V guests over to node 2.

This seriously impacts VM guest performance.

Redirected Access can't be turned off in the GUI; it takes a restart of the cluster service on both nodes.

How can I prevent Redirected Access from being enabled automatically when the fibre cable is disconnected? I've searched the Internet and can't find any solution that prevents it. It is very annoying.

Regards

Thanks...KEN
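For anyone hitting this, a quick way to see each CSV's I/O mode and why it is redirected is the sketch below (run from any cluster node; note that redirected mode after losing the storage path is by design, and moving the CSV owner to a node that still sees the LUN should restore direct I/O):

```powershell
# Show each CSV's I/O mode and, if redirected, the reason.
# 'Direct' means normal block I/O; anything else is redirected.
Get-ClusterSharedVolumeState |
    Format-Table Name, Node, StateInfo, FileSystemRedirectedIOReason, BlockRedirectedIOReason

# Move ownership of a CSV (name is a placeholder) to a healthy node:
Move-ClusterSharedVolume -Name 'Cluster Disk 1' -Node node-2
```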


"microsoft hpc azure client" Self-signed certificate keeps auto-populating into the certificate store

Our security team is finding this certificate in the certificate store via a vulnerability scan. The "Microsoft HPC Azure Client" self-signed certificate keeps auto-populating into the store. I've deleted it (and obviously replaced it with another), but it keeps coming back, like the Terminator movies. How do I get rid of this certificate once and for all? I'm running HPC Pack 2016 Update 3 on Windows Server 2016.
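For locating and deleting the certificate from PowerShell, a hedged sketch (the subject match is an assumption — verify the actual Subject on your system first; also be aware the HPC services likely recreate this certificate themselves, so deleting it alone may not be a lasting fix):

```powershell
# Find and remove the self-signed cert by subject name.
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object Subject -like '*HPC*Azure*' |
    Remove-Item -WhatIf    # drop -WhatIf once you've verified the match
```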

NIC Teaming or MPIO for SAN in Windows Server 2019

I've been looking online for best practices. I have a brand-new Windows Server 2019 Datacenter installation on my host machine. I'm trying to set up something new (in our environment): high availability for our storage, so we can store VMs on a CSV. I have two separate 10 Gbps cards. My question is, when I set this up and configure iSCSI, should I create a NIC team in Server Manager for my storage network, or should I keep two separate cards with separate IPs and turn on MPIO? I'm seeing articles about this, but most are outdated and talk about Windows Server 2012 R2, when the technology was not as mature as it is today.
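The common guidance for iSCSI is MPIO with separate NICs/IPs rather than a NIC team (teaming is generally discouraged for iSCSI traffic). A hedged sketch of the MPIO side:

```powershell
# Install MPIO and have the Microsoft DSM claim iSCSI devices
# automatically. A reboot is typically required after the feature
# install.
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Optional: round-robin across both 10 Gbps paths.
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```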

Failover Cluster Manager 2012: Remote Desktop


Hey,

In Failover Cluster Manager there is a funny little option called "Remote Desktop":

[screenshot: cluster Remote Desktop option]

What additional steps does one have to take in order to use it (including firewall and such)?

Thanks a lot!
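That option is essentially a shortcut to an RDP session to the selected node, so the node itself must accept Remote Desktop connections. A hedged sketch of enabling that on a node (the firewall display group name is the English one; it differs on localized systems):

```powershell
# Allow Remote Desktop connections and open the built-in firewall
# rule group on the node.
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' `
                 -Name fDenyTSConnections -Value 0
Enable-NetFirewallRule -DisplayGroup 'Remote Desktop'
```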

Virtual Network Name Accounts do not update OS version in Active Directory after Windows upgrade

I have a number of Failover Clusters that were originally created with Windows Server 2016 but have since been upgraded to 2019. The nodes themselves show up in Active Directory as Windows Server 2019 machines, but the Virtual Cluster Network Name Accounts still say they are Windows Server 2016 even though the cluster functional level has been upgraded to version 10 (2019). This screws with my inventory. Is there any way to force the cluster to update that information somehow? 
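To see which computer accounts are reporting stale OS info (for inventory reconciliation), something like the sketch below can help; the name filter is a placeholder for your CNO/VCO naming convention:

```powershell
# List cluster-related computer accounts and the OS they report in AD.
Get-ADComputer -Filter 'Name -like "CLUSTER*"' `
               -Properties OperatingSystem, OperatingSystemVersion |
    Select-Object Name, OperatingSystem, OperatingSystemVersion
```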

Configuration of "Remote-updating mode" in Cluster-Aware Updating (Windows Server 2012r2)


Hi,

I'm looking for exact steps (points 1-x) for configuring Remote-updating mode in Cluster-Aware Updating. There are extra steps needed, since this mode involves connecting from a remote PC to a cluster with CAU. These are the options needed when setting up Self-updating mode:

[screenshot: remote update options for CAU]

Microsoft covered Self-updating mode pretty well, but for Remote-updating mode there's just a little here and a little there:

"Remote-updating mode For this mode, a remote computer, which is called an Update Coordinator, is configured with the CAU tools. The Update Coordinator is not a member of the cluster that is updated during the Updating Run. From the remote computer, the administrator triggers an on-demand Updating Run by using a default or custom Updating Run profile. Remote-updating mode is useful for monitoring real-time progress during the Updating Run, and for clusters that are running on Server Core installations."

"You must install the Failover Clustering Tools as follows to support the different CAU updating modes:

  • To use CAU in self-updating mode, install the Failover Clustering Tools on each cluster node.

  • To enable remote-updating mode, install the Failover Clustering Tools on a computer that has network connectivity to the failover cluster."

https://docs.microsoft.com/en-us/windows-server/failover-clustering/cluster-aware-updating

"Remote-updating Enables you to start an Updating Run at any time from a computer running Windows or Windows Server. You can start an Updating Run through the Cluster-Aware Updating window or by using the Invoke-CauRun PowerShell cmdlet. Remote-updating is the default updating mode for CAU. You can use Task Scheduler to run the Invoke-CauRun cmdlet on a desired schedule from a remote computer that is not one of the cluster nodes."

https://docs.microsoft.com/en-us/windows-server/failover-clustering/cluster-aware-updating-faq

This is only partial info. I'd need a step-by-step configuration, plus what exactly you do on the remote PC vs. what you do on the cluster with CAU. Firewall, remote management, etc...
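Not a full step-by-step, but as a hedged sketch of the remote side: install the Failover Clustering Tools (RSAT) on the coordinator PC, make sure PowerShell remoting (WinRM) works against the nodes and that the nodes allow remote shutdown/restart (the "Remote Shutdown" firewall rule group is what CAU's own readiness check reportedly looks for), then trigger the run; the cluster name below is a placeholder:

```powershell
# Run from the Update Coordinator (a machine that is NOT a cluster
# node), with the Failover Clustering Tools installed.
Invoke-CauRun -ClusterName MYCLUSTER `
              -MaxFailedNodes 1 `
              -MaxRetriesPerNode 2 `
              -RequireAllNodesOnline `
              -Force
```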

Thanks for your time!

Windows Server 2012r2: Adding a resource in Failover Cluster Manager


Hi,

[screenshot: add a resource]

In WS 2008 R2 this is the way one can add a resource. How do you do this in Windows Server 2012 R2? Is this the way:

How do they differ (2008 R2 vs 2012/R2)?

Thanks!
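In 2012 R2 the PowerShell route looks roughly like the sketch below (all names are placeholders); the GUI route is similar to 2008 R2, via the role's "Add Resource" context menu:

```powershell
# Add a resource to an existing clustered role.
Add-ClusterResource -Name 'MyAppResource' `
                    -ResourceType 'Generic Application' `
                    -Group 'MyRole'

# See what resource types are available on this cluster:
Get-ClusterResourceType | Select-Object Name
```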

VM rejected move due to error 0x320033


Hi all,

Recently I noticed that a few VMs on a failover cluster (functional level 10) on Windows Server 2016 Datacenter occasionally have issues with live migration. Below is the error message. I've been trying to find more details, but Googling has not turned up anything for error code 0x320033.

"Cluster resource 'Virtual Machine SCOMDWDB01' in clustered role 'SCOMDWDB01' rejected a move request to node 'stalk'. The error code was '0x320033'.  Cluster resource 'Virtual Machine SCOMDWDB01' may be busy or in a state where it cannot be moved.  The cluster service may automatically retry the move."

No issues with NUMA, and the failover cluster network is OK.

1) Moving the VMs around while they are shut down works fine.
2) A quick migration always works (I guess because it stops the VM), after which I can do a live migration again until the issue above happens.
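When the event log message is this generic, the cluster debug log usually has more detail around the timestamp of the rejected move. A small sketch:

```powershell
# Pull the last 15 minutes of the cluster debug log from every node
# into C:\Temp, then search the files for the VM resource name.
Get-ClusterLog -TimeSpan 15 -Destination C:\Temp
```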


Failover Cluster Manager: High Availability Wizard: "Other Server"


Hi,

The High Availability Wizard (in Failover Cluster Manager) has these roles (2012 R2):

[screenshot: High Availability Wizard role list]

I'm trying to familiarize myself with all of them. While some are obvious (file server, vm, dhcp server, dfs namespace server...) others are not so obvious. Can somebody briefly tell me what are these for:

  • "Other Server"
  • "DTC"
  • "Message Queuing"

Another question: if I want to cluster AD CS, AD FS, or AD RMS, would I use "Generic Service"? How exactly would I do that? I guess "Generic Application" would be for some program. Can anybody shed some light on this topic?

Thank you!
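For the Generic Service part, a hedged sketch of how an arbitrary Windows service gets clustered (names are placeholders; note that AD FS and AD RMS normally get their high availability from their own farm/multi-server deployment models rather than from failover clustering):

```powershell
# Cluster an arbitrary Windows service as a Generic Service role.
# 'MyService' is the service's short name (as shown by Get-Service).
Add-ClusterGenericServiceRole -ServiceName 'MyService' `
                              -Name 'MySvcRole' `
                              -Storage 'Cluster Disk 2'
```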

Network Load Balancing: different subnets / cross-subnet


Hi,

Network Load Balancing is supposed to live in one subnet. Is there a way to configure it across another subnet? By adding another cluster with a different IP? Can somebody explain this one?

Network Load Balancing: "Remove from view" option


Hello,

In NLB, what is the "Remove from view" option for?

What does it actually do?

2016 Server Hyper-V Reverts to .XML files after joining a cluster


Has anyone else noticed that Windows Server 2016 reverts to using the old-format .XML files for VM configurations after joining a cluster? In this case the cluster was at the 2012 R2 functional level, which may be a factor.

The problem we're having is that we had some local VMs on the machines, and as soon as the servers joined the cluster, the running machines disappeared from all management. Strangely enough, they are still running, but they no longer show up in Get-VM or in Hyper-V Manager.

So I RDPed into one of them, shut it down, and tried to re-import it, but 2016 would not even let me re-import it with the VMCX configuration files; it said "No Virtual Machines Found" in that folder. I had to re-create it and attach the VHDX, and it created the old-style XML files.

I'm wondering if this has to do with the functional level, but all the VMs on the cluster have XML files, even ones created with 2016, so I'm thinking it might just be intentional?

Anyone seen this behavior?

Thanks!

Adding a file share witness in a ( two nodes ) WSFC ( without AD ) ( without DNS server )


Hi 

I have created a WSFC (without AD, without a DNS server) which has only two nodes.
I want to add a file share as a witness, but I get the error below.

Witness Type:            File Share Witness
Witness Resource:        \\192.168.251.17\share
Cluster Managed Voting:  Enabled
Started:                 12/10/2019 2:14:30 PM
Completed:               12/10/2019 2:14:30 PM

Could not grant the cluster access to the file share '\\192.168.251.17\share'.
There was an error granting the cluster access to the selected file share '\\192.168.251.17\share'.
Failed to grant permissions for the cluster 'mycluster' to access the share 'share'.
An error occurred looking up the security ID of the cluster name object for 'mycluster'.
No mapping between account names and security IDs was done

The network share \\192.168.251.17\share is on a third node (not in the cluster).
I've found some articles on the Internet which say it is impossible to configure a file share witness in a two-node WSFC without AD.

My question is: is that true?
If not, is there something I'm missing?
Thank you!
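It's no longer true on Windows Server 2019, which added a `-Credential` parameter so the file share witness can authenticate with a local account on the file server, with no AD involved (on 2016 workgroup clusters the classic workaround was identical local accounts on every machine). A hedged sketch, using the share path from the post:

```powershell
# On a WS2019 workgroup cluster: configure a file share witness using
# a local account that exists on the file server and has full control
# of the share and its folder.
Set-ClusterQuorum -FileShareWitness '\\192.168.251.17\share' `
                  -Credential (Get-Credential)
```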



S2D validation report fails: Verify Unique Enclosure Identifiers fails


Dears,

I'm setting up a 4-node S2D cluster on Windows Server 2019.

The cluster validation report fails only on the S2D test "Verify Unique Enclosure Identifiers": the enclosure connected to node A has the same unique identifier as the enclosure connected to node B.

However, in my solution the storage is HP Synergy, and it is just one enclosure. I don't have more than one enclosure, hence a single ID for it.

Yet the test fails and I can't create the cluster.

Can you advise on this, please?

Thank you in advance.
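As a hedged starting point, it may help to confirm what IDs the test is actually comparing, and — only if the shared-enclosure design is confirmed as expected for this hardware — re-run validation while skipping that one test (the exact test name to pass to `-Ignore` should be taken from your validation report; node names are placeholders):

```powershell
# See the enclosure identifiers each node reports.
Get-StorageEnclosure | Select-Object FriendlyName, SerialNumber, UniqueId

# Re-run validation without the single failing test.
Test-Cluster -Node node-a, node-b, node-c, node-d `
             -Ignore 'Verify Unique Enclosure Identifiers'
```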

S2D node scaling and resiliency


Hi,

Microsoft's documentation states "with 5 and above nodes you can survive 2 simultaneous node failures".

That "2 simultaneous node failures" figure isn't exhaustive, is it? So:

7 nodes could survive 3 failures (4 remaining votes)
9 nodes could survive 4 failures (5 remaining votes)
15 nodes could survive 7 failures (8 remaining votes)

etc. As long as more votes remain than were lost, quorum is maintained, both for the cluster and the pool.
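The majority rule above can be written down as a small sketch (quorum only; note that *data* resiliency is a separate limit — e.g. a three-way mirror tolerates two failed fault domains regardless of node count, so surviving node failures beyond that still depends on the resiliency settings of each volume):

```powershell
# With N node votes and no witness, the cluster keeps quorum while
# more than half the votes survive: it tolerates Ceiling(N/2) - 1
# simultaneous node failures.
function Get-TolerableNodeFailures([int]$Nodes) {
    [math]::Ceiling($Nodes / 2) - 1
}

7, 9, 15 | ForEach-Object {
    "$_ nodes -> survives $(Get-TolerableNodeFailures $_) failures"
}
# Matches the figures above: 7 -> 3, 9 -> 4, 15 -> 7.
```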

Thanks


Changing LUN replication target in stretch cluster


We currently have a 4-node Windows stretch cluster: two nodes in one data center and two nodes in another. Right now, all SAN storage is in the same data center on two different storage controllers. What we want to do is add a new LUN from the second data center to the existing cluster and change the replication target to this new LUN, then remove the original LUN from the nodes in the second data center, so that we end up with one replication between the data centers for redundancy.

What would be the best practice for this? My thoughts are as follows:

1. Add the new LUN to both nodes in DC2.

2. Format the LUN on node 1 in DC2.

3. Stop replication on the existing share.

4. Set the new LUN up as the new replication target.

5. Take the original LUN offline and remove it from the host.

I would think the initial block copy should move all the data over from the main share in DC1 to the new LUN in DC2.

We do not want to change the original share name, or recreate it, because Komprise is using it for archiving off of the NetApp.
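Assuming Storage Replica is what's replicating the LUN (an assumption — adjust if the replication is array-based), the steps above map roughly onto the sketch below. All computer, replication group, and volume names are placeholders; removing the partnership stops replication but leaves the data intact, and the new partnership triggers the full initial block copy:

```powershell
# 1. Inspect the current partnership.
Get-SRPartnership

# 2. Remove the old partnership (replication stops, data stays).
Remove-SRPartnership -SourceComputerName DC1NODE -SourceRGName RG01 `
                     -DestinationComputerName DC2NODE -DestinationRGName RG02

# 3. Re-create it against the new destination volume on the new LUN.
New-SRPartnership -SourceComputerName DC1NODE -SourceRGName RG01 `
                  -SourceVolumeName E: -SourceLogVolumeName L: `
                  -DestinationComputerName DC2NODE -DestinationRGName RG03 `
                  -DestinationVolumeName E: -DestinationLogVolumeName L:
```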

Is it possible to create failover cluster of two IIS servers?


Hello,

I have IIS server (server1.local) on Site1 (192.168.1.0/24)

Now we have also Site2 (192.168.2.0/24) with IIS server (server2.local).

Is it possible to create a failover cluster of two IIS servers? One of the servers should keep working if the other site is unavailable. Changes on one server must be replicated to the other server.

Clients access the server by DNS name. How will this work in the case of a cluster?

How best to get system out of current "Removing From Pool" and "Repair" states?


Basic Context:

  • Windows Server 2019 Datacenter
  • 2-node Hyper-V cluster
  • Storage Spaces Direct
  • 2x Dell PowerEdge R740
  • 2x QLogic FastLinQ 41262 Dual Port 25Gb SFP28 Adapter
  • 2x 2x 800GB SSD SAS
  • 2x 10x 1.2TB 10K SAS
  • File Share Witness on NAS

Summary:

Two days ago, I discovered that the cluster group and IP address were offline and could not be brought online. Hoping to resolve the problem with a reboot of the nodes, I unfortunately went through the wrong steps for shutting down the VMs and restarting the cluster. When the nodes restarted, the failover cluster could no longer communicate, and much appeared broken. (It turns out that it was probably a simple matter of fixing broken firewall rules, but I did not discover this until later.) My colleague and I followed some guidance that, it turned out later, did not apply to our problem (i.e. remove the drives marked "Communication Lost"), and I find myself with a degraded cluster. I don't wish to make any more missteps in bringing it back to a healthy state, so I turn to the wiser community for some guidance.

Here is how things stand, according to the commands I've seen recommended (I have an image of this, but I can't post it until my account is verified):

  • Get-PhysicalDisk shows the 12 drives in the node currently owning the cluster with OperationalStatus "OK" and HealthStatus "Healthy". The 12 drives on the once-disconnected node have OperationalStatus "Removing From Pool, OK" and HealthStatus "Healthy".
  • Get-VirtualDisk shows the OperationalStatus of our four volumes as "Degraded" and the HealthStatus as "Warning".
  • Get-StorageJob shows each volume with "-Repair" appended to the name, IsBackgroundTask fluctuates between "True" and "False", and JobState shows as "Suspended" for all four repair jobs. The elapsed time progresses, but there is no progress.

As you can hopefully see, there are a few things that are going on here, or could/should be. There's the removal of the once-disconnected node's drives from the clustered storage, and there's the repair of the cluster volumes. But I don't see progress on either front.

What do you think is the best way to proceed from this state? What commands will put things on the right track, and in which order should they be issued?

Many thanks in advance for considering my problem and contributing to the solution!
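A hedged sketch of the usual way out of this state — reverse the unintended disk retirement first, then let the suspended repair jobs run; verify each step's output against your own situation before running the next:

```powershell
# 1. Un-retire the disks that were marked for removal from the pool.
Get-PhysicalDisk | Where-Object Usage -eq 'Retired' |
    Set-PhysicalDisk -Usage AutoSelect

# 2. Kick off (or resume) repair on each degraded volume.
Get-VirtualDisk | Where-Object HealthStatus -ne 'Healthy' |
    Repair-VirtualDisk

# 3. Watch progress until the repair jobs complete.
Get-StorageJob
```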


Failover Cluster Manager: "Close Connection"


Hi,

I wonder what happens if you just press the "Close Connection" button?

[screenshot: Close Connection]

Thanks

Can only live migrate VMs started on certain hosts


I inherited support of a 2016 Hyper-V cluster of four hosts, using a SAN for storage. Given the number of VMs and their CPU/memory requirements, it was obvious that the cluster needed to grow.

Stood up the new host, matched up permissions, set up constrained delegation, allowed unencrypted WinRM/WMI, and added it to the cluster.

If I build a VM on the new node and start it, I'm unable to live migrate it to the other nodes. I get the "processor-specific features" message despite having ticked the CPU compatibility checkbox.

However, if I shut that VM down, quick migrate it to another node and turn it back on, I can then live migrate it back and forth between the new node and the old nodes all day long.

Can someone suggest where to look to straighten this out? Thanks in advance for any help!

CPU DETAILS:

As the new host is, well, newer than the preexisting hosts, the CPUs are newer.

Host 1 - Intel64 Family 6 Model 79 Stepping 1

Hosts 2-4 - Intel64 Family 6 Model 62 Stepping 4

Host 5 - Intel64 Family 6 Model 85 Stepping 7
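One hedged thing worth checking: the processor compatibility setting is captured when the VM *starts*, which matches the symptom — a VM powered on the newer host carries the newer CPU feature set until it's power-cycled, while a VM started on an older host (or quick-migrated while off) has the reduced feature set and can move freely. 'MYVM' below is a placeholder:

```powershell
# Confirm whether the compatibility flag actually stuck on the VM.
Get-VMProcessor -VMName MYVM |
    Select-Object VMName, CompatibilityForMigrationEnabled

# Set it while the VM is off, then start it:
Set-VMProcessor -VMName MYVM -CompatibilityForMigrationEnabled $true
```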
