Channel: High Availability (Clustering) forum

P2V of Windows 2008 R2 SQL clustered servers


Hi

We are planning to P2V our existing Windows 2008 R2 / SQL 2008 cluster hosts (in the test lab we did it successfully). We are concerned about a few points:

a) In case of failure / application issues we will revert to the existing production servers. If the domain trust fails for the physical servers we can simply rejoin those hosts to the domain, but if the trust of the cluster name (the cluster name object) fails, how can we correct it?

b) Are there any known issues / precautions to be taken before converting clustered physical hosts to VMs?
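
For (a), this is the kind of check/repair I assume would apply to the reverted physical hosts (a minimal sketch; please correct me if the cluster name needs something different):

# On each reverted physical host, verify/repair the machine account secure channel with the domain
Test-ComputerSecureChannel -Verbose           # returns False if the trust is broken
Test-ComputerSecureChannel -Repair -Verbose   # attempts to reset the machine account password

And I assume the cluster name object itself would be repaired from Failover Cluster Manager (cluster Name resource > More Actions > Repair Active Directory Object), which resets the CNO password rather than rejoining anything - is that correct?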

Thanks in advance

  


LMS


Failover Cluster Manager - Network Not Showing


I have two NIC teams set up. One is called MGMT and the other is called DATA.

In Virtual Switch Manager I have added an external network that uses the DATA network.

The only network showing in Failover Clustering is the MGMT team, not the DATA team. Any ideas?
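
As a first check, this is what I'm running to see which networks the cluster has actually discovered and which adapters back them (a minimal sketch, run on either node):

# List the networks the cluster knows about, their role and subnet
Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask

# Show which adapter on each node maps to each cluster network
Get-ClusterNetworkInterface | Format-Table Node, Network, Adapter, Address

I suspect the cluster only enumerates subnets the host itself has an IP address on, so if the DATA team is consumed entirely by the external virtual switch without a management OS vNIC, it may simply not show up - can anyone confirm?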

Thanks

Multisite clustering and AGs


Hi all,

I have been researching Windows Server 2016 features and the new AG enhancements lately. I need your guidance on the following requirement and the hypothesis I propose to test in my virtual lab. I'd like to make sure my interpretation of AG concepts is accurate, and I'd appreciate comments and an alternate approach if my hypothesis is not applicable.

OS: Windows Server 2016; SQL Server 2016 EE

Requirement:

Let's assume I have 4 nodes (N1, N2, N3, N4): 2 nodes in location A (site 1) and the other 2 in location B (site 2). App1 accesses databases (A, B, C) in site 1 for all its operations and App2 accesses databases (X, Y, Z) in site 2. In my case, each site acts as primary for its own business app. All write operations should stay local to each site, whereas reads could be load-balanced.

The following is my proposal, based on my understanding of FCIs/AGs and stretch clusters:

-- Using FCI and AGs

1) Create single WSFC between 4 nodes 

2) Create and configure FCI-1 on nodes N1 and N2

3) Create and configure FCI-2 on nodes N3 and N4

4) Create AG-1 and its listener from site 1 to site 2 on databases (A, B, C) - readable secondaries (async or sync)

5) Similarly, create AG-2 and its listener on databases (X, Y, Z) from site 2 to site 1

Hypothesis: this gives me HA between N1 and N2 within site 1 while at the same time I have secondary replicas in site 2. I could also set up a read-only routing list (pointing to FCI-2) to load-balance reads across the two FCIs (see the sketch below). The same applies in reverse for the other set of databases from site 2 to site 1.
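
A sketch of the read-only routing piece of step 4, assuming the SqlServer/SQLPS PowerShell module is available (instance names, replica names and URLs below are placeholders, not from a real environment):

Import-Module SqlServer   # or SQLPS on older installations

# Tell each replica how to accept read-only connections (placeholder TCP endpoints)
Set-SqlAvailabilityReplica -Path "SQLSERVER:\SQL\FCI-1\DEFAULT\AvailabilityGroups\AG-1\AvailabilityReplicas\FCI-1" `
    -ReadonlyRoutingConnectionUrl "TCP://fci1.contoso.com:1433"
Set-SqlAvailabilityReplica -Path "SQLSERVER:\SQL\FCI-1\DEFAULT\AvailabilityGroups\AG-1\AvailabilityReplicas\FCI-2" `
    -ReadonlyRoutingConnectionUrl "TCP://fci2.contoso.com:1433"

# While FCI-1 is primary for AG-1, route read-intent connections to FCI-2 first, then FCI-1
Set-SqlAvailabilityReplica -Path "SQLSERVER:\SQL\FCI-1\DEFAULT\AvailabilityGroups\AG-1\AvailabilityReplicas\FCI-1" `
    -ReadonlyRoutingList "FCI-2", "FCI-1"

Applications would then connect to the AG-1 listener with ApplicationIntent=ReadOnly to be routed to the readable secondary.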

Harsha




What are the host-side impacts of increasing the cluster node count?


hi,

We are doing some interoperability testing between clustering and storage. In some of the cases a 3-node cluster is more likely to trigger the issue on the host side than a 2-node cluster. From the Windows cluster's point of view, which functions/components could be affected by the number of nodes, or does the node count not matter at all? Thanks!

Failover Cluster volume inaccessible, showing GUID not volume


We have a file cluster on Server 2012 R2 (fully updated) with an RDM passthrough disk to a Dell Compellent SAN using VMware ESXi 5.5. Pathing is set to MRU for the RDM.

We lost access to the cluster volume, and in Failover Cluster Manager, where it would normally display the drive letter of the volume with its name, it was showing the GUID and reporting 'Unknown'. Failing over to the other node resolved this. The VMware VM/host logs show no drops in connectivity, nor does the SAN report an issue. This is a fibre channel SAN, not iSCSI.

The logs at the time on the cluster node that was active say the following:

[RES] Physical Disk: Failed to open device \Device\Harddisk2\ClusterPartition1, status 0xc0000034
[RES] Physical Disk: HarddiskpIsPartitionHidden: failed to open device \Device\Harddisk2\ClusterPartition1, status 2
[RES] Physical Disk: HarddiskpIsPartitionHidden: failed to open device \Device\Harddisk2\ClusterPartition1, status 2
[RES] Physical Disk: HarddiskpIsPartitionHidden: failed to open device \Device\Harddisk2\ClusterPartition1, status 2
[RES] Physical Disk: HardDiskpGetDiskInfo: GetVolumeInformation failed for \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition1\, status 3
[RES] Physical Disk: HardDiskpGetDiskInfo: failed to get partition size for \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition1\, status 3

We're struggling to find a reason why this happened. Can anyone suggest possible causes, or where to look for an explanation?
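
For the next occurrence, I'm planning to capture the full cluster debug log around the failure window - a minimal sketch (destination folder is a placeholder):

# Generate cluster.log from every node for the last 60 minutes, using local time stamps
Get-ClusterLog -UseLocalTime -TimeSpan 60 -Destination C:\Temp\ClusterLogs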

Thank you in advance.

EventID 10028: DCOM was unable to communicate with the computer X using any of the configured protocols


I have many of these errors in the Event Viewer:

DCOM was unable to communicate with the computer X using any of the configured protocols

And it is correct, because computer X doesn't exist anymore, but I don't know what is trying to connect to it.
How can I find out which application/service is trying to connect to computer X?

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
- <System><Provider Name="Microsoft-Windows-DistributedCOM" Guid="{1B562E86-B7AA-4131-BADC-B6F3A001407E}" EventSourceName="DCOM" /><EventID Qualifiers="0">10028</EventID><Version>0</Version><Level>2</Level><Task>0</Task><Opcode>0</Opcode><Keywords>0x8080000000000000</Keywords><TimeCreated SystemTime="2016-12-08T08:23:11.475435300Z" /><EventRecordID>117706</EventRecordID><Correlation /><Execution ProcessID="748" ThreadID="2836" /><Channel>System</Channel><Computer>Server</Computer><Security UserID="S-1-5-21-3416440569-1641309572-3638930203-1105" /></System>
- <EventData><Data Name="param1">X</Data><Data Name="param2">126c</Data><Data Name="param3">C:\Windows\system32\ServerManager.exe</Data></EventData></Event>

The process ID 748 is the "Remote Procedure Call" service, which doesn't help me further because I don't know what is using it. How can I trace it back and find the origin of the problem?
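
Here is a sketch I used to pull the recent 10028 events and read the target machine and the calling binary out of the event data:

# List recent DCOM 10028 events with the unreachable target (param1) and the application recorded in the event (param3)
Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'Microsoft-Windows-DistributedCOM'; Id = 10028 } -MaxEvents 50 |
    ForEach-Object {
        $x = [xml]$_.ToXml()
        [pscustomobject]@{
            Time        = $_.TimeCreated
            Target      = ($x.Event.EventData.Data | Where-Object { $_.Name -eq 'param1' }).'#text'
            Application = ($x.Event.EventData.Data | Where-Object { $_.Name -eq 'param3' }).'#text'
        }
    } | Format-Table -AutoSize

In the event above, param3 points at C:\Windows\system32\ServerManager.exe, so Server Manager may still have the decommissioned machine in its server list - but I'd like to confirm how to trace this properly.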

Thanks



Windows failover cluster across datacenters: node showing as down with network unavailable


hi Team,

We have a 3-node Windows cluster running a SQL FCI. Two nodes are at the primary DC and one node is at the DR site. I am observing that node 3 has started showing as down in Cluster Manager, with its networks shown as 'unavailable'. The two nodes at the primary site are up and running with the SQL FCI.

While running cluster validation on node 3, I am observing that even the IP configuration and "detect update level" validation tests fail, although node 1 and node 2 are listed. Please find the error below. I am not able to figure out what is going on with the cluster and why node 3 shows as down. Any pointers will be appreciated. Thanks.
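
For reference, this is roughly how I'm re-running just the network validation and checking how the cluster sees node 3's interfaces (node names and report path are placeholders):

# Run only the inventory and network validation categories against all three nodes
Test-Cluster -Node Node1, Node2, Node3 -Include "Inventory", "Network" -ReportName C:\Temp\NetValidation

# Check the state of node 3's cluster network interfaces as seen by the cluster
Get-ClusterNetworkInterface | Where-Object { $_.Node -eq "Node3" } | Format-Table Network, Adapter, State, Address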

 Regards,

OPS Microsoft Server Medium: The system detected an address conflict for IP address 10.x.x.x with the system having network hardware address 00-xx-56-xx-xx-xx. Network operations on this system may be disrupted as a result.


Hi All,

Whenever my cluster resources move from one node to the other, I get a few error/warning messages. Can anyone help me understand why?

Cluster 2012R2
Witness File Share

Application Cluster: SQL Always On

Cluster resource 'ALXXXX_172.xx.xx.xx' of type 'IP Address' in clustered role 'ALxxxx' failed.
Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.

Cluster IP address resource 'ALxxxx_172.xx.xxx.xx' cannot be brought online because a duplicate IP address '172.xx.xx.xx' was detected on the network.  Please ensure all IP addresses are unique.

Cluster node 'Node04' was removed from the active failover cluster membership. The Cluster service on this node may have stopped. This could also be due to the node having lost communication with other active nodes in the failover cluster. Run the Validate a Configuration wizard to check your network configuration. If the condition persists, check for hardware or software errors related to the network adapters on this node. Also check for failures in any other network components to which the node is connected such as hubs, switches, or bridges.

The Cluster service is shutting down because quorum was lost. This could be due to the loss of network connectivity between some or all nodes in the cluster, or a failover of the witness disk.
Run the Validate a Configuration wizard to check your network configuration. If the condition persists, check for hardware or software errors related to the network adapter. Also check for failures in any other network components to which the node is connected such as hubs, switches, or bridges.

File share witness resource 'File Share Witness' failed to arbitrate for the file share '\\Filer006\QA_xxx_xxx'. Please ensure that file share '\\Filer006\QA_xxx_xxx' exists and is accessible by the cluster.

The cluster Resource Hosting Subsystem (RHS) process was terminated and will be restarted. This is typically associated with cluster health detection and recovery of a resource. Refer to the System event log to determine which resource and resource DLL is causing the issue.

I also checked the cluster logs but was not able to find anything there. My cluster keeps working fine even after getting these errors/warnings.
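
As a next step, here is a rough sketch of how I plan to hunt for the duplicate (the address below is a placeholder for the real cluster IP): with the clustered IP resource offline, probe the address from a machine on the same VLAN and see which MAC answers, then compare it against the node NICs.

# Probe the cluster IP and inspect the neighbor (ARP) cache entry that gets populated
ping.exe -n 2 172.16.10.50 | Out-Null
Get-NetNeighbor -IPAddress 172.16.10.50 | Format-Table IPAddress, LinkLayerAddress, State

# Compare against the MACs of the local cluster node adapters
Get-NetAdapter | Format-Table Name, MacAddress, Status

If the MAC in the neighbor cache isn't one of the cluster nodes, the 00-xx-56-xx-xx-xx address from the OPS alert should help identify whatever else is holding the address.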


Network traffic sent through the cluster IP

I configured a failover cluster on Windows Server 2012 R2 and it is working fine. But if I send network traffic (e.g. telnet) from the cluster server to a client, it goes out using the node's physical IP address. I want outbound traffic to use the cluster IP instead. Please help me understand how to configure this.

Clustered Scheduled Tasks - Problems viewing tasks and how to start manually using powershell.


Hi,

I'm looking at running a 2-node failover cluster on Server 2012 and would like to make use of the new clustered scheduled tasks capability.
Reading through a lot of guides online, I'm on board with managing the tasks from PowerShell. So far I have created tasks and bound them to a resource.

But all the guides mention that the tasks should be visible in the Task Scheduler MMC snap-in. I just can't find them.
I've checked the Failover Clustering section on both nodes (locally and on the role name) and the tasks are nowhere to be found.

What am I doing wrong?

Also, does anyone know how to manually start a clustered scheduled task from PowerShell?

# This schedules a task...
# The batch file pipes text to a file

$action = New-ScheduledTaskAction -Execute E:\Jobs\test\Runme.bat
$trigger = New-ScheduledTaskTrigger -At 10:13 -Daily
$trigger.RepetitionInterval = (New-TimeSpan -Minutes 5)
$trigger.RepetitionDuration = (New-TimeSpan -Days 1)
Register-ClusteredScheduledTask -Cluster "srv2012cl1n1" -TaskName kasper-test -TaskType ResourceSpecific -Resource "Cluster Disk 1" -Action $action -Trigger $trigger
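
For the manual start, this is what I'm guessing should work, assuming the clustered task gets registered in the local Task Scheduler library of the node that currently owns "Cluster Disk 1" - is that right?

# List the clustered tasks registered on the cluster
Get-ClusteredScheduledTask -Cluster "srv2012cl1n1"

# On the node that owns the bound resource, the task should also be visible locally,
# where it can be started like any other scheduled task
Get-ScheduledTask -TaskName "kasper-test" | Start-ScheduledTask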


Kasper

Create cluster with SOFS


I need to create a Hyper-V cluster on this hardware:

1) Host 1

2) Host 2

3) Fileserver

4) SAN

5) Switch 1 Gb/s

6) Windows Server 2012 R2 Standard - 3 licenses

I'd like to use the Scale-Out File Server approach for the file server, but we don't have a VMM license.

Without VMM, I cannot add the storage to the Hyper-V cluster.

__________________________________________________________

What is the best approach in my situation?
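
For context, this is the kind of no-VMM setup I had in mind (share names, computer accounts and paths are placeholders; the key part seems to be granting the Hyper-V hosts' and cluster's computer accounts full control on the share and matching NTFS permissions):

# On the file server / SOFS role: publish a share for application data
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\Shares\VMStore" `
    -FullAccess "CONTOSO\HV1$", "CONTOSO\HV2$", "CONTOSO\HVCluster$", "CONTOSO\Domain Admins"

# On a Hyper-V host: create (or storage-migrate) a VM straight onto the SMB path - no VMM needed
New-VM -Name "TestVM" -MemoryStartupBytes 2GB -Generation 2 `
    -Path "\\SOFS\VMStore" -NewVHDPath "\\SOFS\VMStore\TestVM\TestVM.vhdx" -NewVHDSizeBytes 60GB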

How to create a client access point using PowerShell


Hello Everyone,

Please point me to a document or script for creating a client access point with an IP address in WSFC using PowerShell.
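
In the meantime, here is a rough sketch of what I've pieced together so far (group name, network and addresses are placeholders) - as I understand it, a client access point is just a Network Name resource depending on one or more IP Address resources:

# Add an IP Address resource to the role and set its address
Add-ClusterResource -Name "CAP1 IP" -ResourceType "IP Address" -Group "MyRole"
Get-ClusterResource "CAP1 IP" | Set-ClusterParameter -Multiple @{
    Network    = "Cluster Network 1"
    Address    = "192.168.10.60"
    SubnetMask = "255.255.255.0"
}

# Add the Network Name resource, point it at the DNS name, and make it depend on the IP
Add-ClusterResource -Name "CAP1" -ResourceType "Network Name" -Group "MyRole"
Get-ClusterResource "CAP1" | Set-ClusterParameter -Multiple @{ Name = "CAP1"; DnsName = "CAP1" }
Add-ClusterResourceDependency -Resource "CAP1" -Provider "CAP1 IP"
Start-ClusterResource "CAP1"

Is this the supported way, or is there a simpler cmdlet for it?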

Regards

Sufian


Mohd Sufian www.sqlship.wordpress.com Please mark the post as Answered if it helped.

EventID 5774: The dynamic registration of the DNS record 'X' failed on the following DNS server


I have many of these errors in the Event Viewer, and I can also see why the registration fails:

The dynamic registration of the DNS record 'X' failed on the following DNS server:

DNS server IP address: 164.128.36.34
Returned Response Code (RCODE): 5
Returned Status Code: 9017  
There is no DNS server with the IP 164.128.36.34! Where is it getting this IP from? The DNS server is running locally on the same machine!
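
As a first check, here is a sketch of how I'm listing the DNS servers configured on each adapter, to see where that address might be coming from (dynamic registration uses the DNS server list of the registering interface, as far as I know):

# Show the DNS servers configured on every IPv4 interface
Get-DnsClientServerAddress -AddressFamily IPv4 |
    Where-Object { $_.ServerAddresses } |
    Format-Table InterfaceAlias, ServerAddresses -AutoSize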

[2012 R2 NLB] http clients see 1 timeout when NLB node is stopped


Windows Server 2012 R2 Network Load Balancing

Test Environment Configuration:

  • 1 Windows Server 2016 Hyper-V host
  • 2 Windows Server 2012 R2 VMs acting as the 2 NLB cluster nodes
  • each node has a management NIC and a dedicated NLB NIC
  • NLB cluster operation mode is Unicast
  • the NLB cluster IPv4 address has a static A record in DNS
  • each VM config has MAC spoofing enabled for the NLB NIC; I presume that if I statically assigned the NLB cluster's MAC to the NLB vNIC in each VM's config it would work without spoofing enabled - right?
  • inside each VM, the NLB adapter is configured not to register with DNS and has no gateway or DNS servers defined
  • the sole port rule is port 80 only, TCP/UDP, no affinity - I'm simply serving up a file

When I browse to http://nlb.contoso.com/somefile, I reach one node and subsequent page refreshes hit the same node. If I stop that node in NLB Manager and then refresh the page on the client, the refresh times out. On the next refresh, the page is loaded from the remaining NLB node. Why? I was under the impression that as soon as I stop node1, refreshing the page on the client should immediately be served from the still-running node2.


born to learn!

Question about dynamic witness with Windows Server 2012 R2


Hi folks,

Happy new year first :-)

I just have a question concerning the new behavior of the witness provided by Windows Server 2012 R2.

According to the Microsoft documentation I can read: "The quorum witness vote is also dynamically adjusted based on the state of the witness resource. If the witness is offline or failed, the cluster sets the witness vote to 0."

However, in my lab environment (a WSFC with two nodes and a file share witness) I noticed that if the FSW is in a failed state, the WitnessDynamicWeight property is still equal to 1. From my understanding this value should change to 0 in that case.

Name       State    NodeWeight    DynamicWeight
Node1      Up       1             1
Node2      Up       1             1

Name       DynamicQuorum    WitnessDynamicWeight
winclust   1                1
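
For reference, the output above came from queries roughly like this:

# Per-node vote assignment and current dynamic weight
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight

# Cluster-level dynamic quorum settings, including the witness's dynamic weight
Get-Cluster | Format-Table Name, DynamicQuorum, WitnessDynamicWeight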



If I then shut down one of my cluster nodes, the quorum is lost in my case.

Any thoughts?

Thanks in advance





Cannot create checkpoint when shared vhdset (.vhds) is used by VM - 'not part of a checkpoint collection' error


We are trying to deploy a 'guest cluster' scenario over Hyper-V with a shared VHD Set hosted on SOFS. By design the .vhds format should fully support the backup feature.

All machines (Hyper-V, guest, SOFS) are installed with Windows Server 2016 Datacenter. Two Hyper-V virtual machines are configured to use a shared disk in .vhds format, located on an SOFS cluster formed of two nodes. The SOFS cluster has a share configured for applications, and Hyper-V uses the \\sofs_server\share_name\disk.vhds path to the SOFS remote storage. The guest machines are configured with the 'File Server' role and the 'Failover Clustering' feature to form a guest cluster. There are two disks configured on each of the guest cluster nodes: 1) a private system disk in .vhdx format (OS) and 2) the shared .vhds disk on SOFS.

While trying to take a checkpoint of a guest machine, I get the following error:

Cannot take checkpoint for 'guest-cluster-node0' because one or more sharable VHDX are attached and this is not part of a checkpoint collection.

Production checkpoints are enabled for VM + 'Create standard checkpoint if it's not possible to create a production checkpoint' option is set. All integration services (including backup) are enabled for VM.

When I remove the shared .vhds disk from the VM's SCSI controller, checkpoints are created normally (for the private OS disk).

It is not clear what a 'checkpoint collection' is and how to add the shared .vhds disk to such a collection. Please advise.

Thanks.

Server 2012 NFS - 10 minutes to resume after failover?


I've got a Server 2012 Core cluster running an HA file server role with the new NFS service. The role has two associated clustered disks. When I fail the role between nodes, it takes 10-12 minutes for the NFS service to come back online - it'll sit in 'Online Pending' for several minutes, then transition to 'Failed', then finally come 'Online'. I've looked at the NFS event logs from the time period of failover and they look slightly odd. For example, in a failover at 18:41, I see this in the Admin log:

Time      EventID  Description
18:41:59  1076     Server for NFS successfully started virtual server {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}
18:50:47  2000     A new NFS share was created. Path:Y:\msd_build, Alias:msd_build, ShareFlags:0xC0AE00, Encoding:7, SecurityFlavorFlags:0x2, UnmappedUid:4294967294, UnmappedGid:4294967294
18:50:47  2000     A new NFS share was created. Path:Z:\eas_build, Alias:eas_build, ShareFlags:0xC0AE00, Encoding:7, SecurityFlavorFlags:0x2, UnmappedUid:4294967294, UnmappedGid:4294967294
18:50:47  2002     A previously shared NFS folder was unshared. Path:Y:\msd_build, Alias:msd_build
18:50:47  2002     A previously shared NFS folder was unshared. Path:Z:\eas_build, Alias:eas_build
18:50:47  1078     NFS virtual server {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd} is stopped
18:51:47  1076     Server for NFS successfully started virtual server {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}
18:51:47  2000     A new NFS share was created. Path:Y:\msd_build, Alias:msd_build, ShareFlags:0xC0AE00, Encoding:7, SecurityFlavorFlags:0x2, UnmappedUid:4294967294, UnmappedGid:4294967294
18:51:47  2000     A new NFS share was created. Path:Z:\eas_build, Alias:eas_build, ShareFlags:0xC0AE00, Encoding:7, SecurityFlavorFlags:0x2, UnmappedUid:4294967294, UnmappedGid:4294967294

In the Operational log, I see this:

Time      EventID  Description
18:41:51  1108     Server for NFS received an arrival notification for volume \Device\HarddiskVolume11.
18:41:51  1079     NFS virtual server successfully created volume \Device\HarddiskVolume11 (ResolvedPath \Device\HarddiskVolume11\, VolumeId {69d0efca-c067-11e1-bbc5-005056925169}).
18:41:58  1108     Server for NFS received an arrival notification for volume \Device\HarddiskVolume9.
18:41:58  1079     NFS virtual server successfully created volume \Device\HarddiskVolume9 (ResolvedPath \Device\HarddiskVolume9\, VolumeId {c5014a4a-d0b8-11e1-bbcb-005056925167}).
18:41:59  1079     NFS virtual server successfully created volume \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\ (ResolvedPath \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\, VolumeId {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}).
18:41:59  1105     Server for NFS started volume \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\ (ResolvedPath \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\, VolumeId {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}).
18:41:59  1079     NFS virtual server successfully created volume \DosDevices\Y:\ (ResolvedPath \Device\HarddiskVolume9\, VolumeId {c5014a4a-d0b8-11e1-bbcb-005056925167}).
18:44:06  1116     Server for NFS discovered volume Z: (ResolvedPath \Device\HarddiskVolume11\, VolumeId {69d0efca-c067-11e1-bbc5-005056925169}) and added it to the known volume table.
18:50:47  1116     Server for NFS discovered volume Y: (ResolvedPath \Device\HarddiskVolume9\, VolumeId {c5014a4a-d0b8-11e1-bbcb-005056925167}) and added it to the known volume table.
18:50:47  1081     NFS virtual server successfully destroyed volume \DosDevices\Y:\.
18:50:47  1105     Server for NFS started volume Y: (ResolvedPath \Device\HarddiskVolume9\, VolumeId {c5014a4a-d0b8-11e1-bbcb-005056925167}).
18:50:47  1105     Server for NFS started volume Z: (ResolvedPath \Device\HarddiskVolume11\, VolumeId {69d0efca-c067-11e1-bbc5-005056925169}).
18:50:48  1106     Server for NFS stopped volume \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\ (ResolvedPath \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\, VolumeId {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}).
18:50:48  1081     NFS virtual server successfully destroyed volume \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\.
18:51:47  1079     NFS virtual server successfully created volume \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\ (ResolvedPath \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\, VolumeId {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}).
18:51:47  1105     Server for NFS started volume \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\ (ResolvedPath \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\, VolumeId {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}).

From this, I'm not sure what's going on between 18:41:59 and 18:44:06, between 18:44:06 and 18:50:47 or between 18:50:48 and 18:51:47. What's the NFS volume discovery doing and why does it take so long?

Does anyone have any thoughts as to where I could start looking to work out what's happening here? Is there any tracing that can be enabled for the NFS services to indicate what's going on?

Thanks in advance!

2012 R2 hangs on 'Forming Cluster' or error 1460 from cluster.exe


I am trying to build a 2-node cluster, however it hangs on 'Forming Cluster' in the wizard. If I try via the cluster.exe command, it gets to '52% Forming Cluster' and then errors with 1460, which I believe is a timeout. The two nodes are VMs on Hyper-V 2012 R2. I have about 4 other 2-node clusters working successfully. I had also created this one successfully once, however we then had to implement IPsec for communications to other subnets (both nodes are in the same subnet, however). Once IPsec was enabled, the cluster failed. Since then, the VMs have been completely rebuilt but I'm still unable to form the cluster. Possibly an AD issue? Permissions seem fine and the cluster object gets created fine. Here is the end of the cluster log. Thanks for any ideas!


00000128.00000370::2014/09/22-12:20:40.657 INFO  </ACL>
00000128.00000370::2014/09/22-12:20:40.657 INFO  [DCM] Filter.SetMDSSecurityDescriptor(Sequence 1, Length=332)
00000128.00000370::2014/09/22-12:20:40.657 INFO  [DCM] Launching CsvFs Listener
00000128.00000370::2014/09/22-12:20:40.657 INFO  [DCM] Launching CsvFlt Listener
00000128.00000960::2014/09/22-12:20:40.657 INFO  [DCM] CsvFs Listener: ping 10808
00000128.0000095c::2014/09/22-12:20:40.657 INFO  [DCM] Opened CsvFlt event port: handle HDL( 770 )
00000128.00000370::2014/09/22-12:20:40.657 INFO  [DCM] Launching Nflt Listener
00000128.00000938::2014/09/22-12:20:40.657 INFO  [DCM] Opened NFlt event port: handle HDL( 778 )
00000128.00000370::2014/09/22-12:20:40.657 INFO  [DCM] Filter.SetSecurityInfo (Sequence 2, NodeId=1, GlobalSequenceNumber=1, KeyBlobSize=0)
00000128.00000370::2014/09/22-12:20:40.657 INFO  [DCM] SetSecurityInfo message sent
00000128.0000099c::2014/09/22-12:20:40.657 INFO  [CLI] Generating key
00000128.0000099c::2014/09/22-12:20:40.657 INFO  [CLI] Successfully initialized key
00000128.0000099c::2014/09/22-12:20:40.657 INFO  [CLI] ExportState - Initializing blob from local registry
00000128.0000099c::2014/09/22-12:20:40.657 INFO  [CLI] On first run
00000128.0000099c::2014/09/22-12:20:40.657 INFO  [CLI] Writing last update time to local registry 130558620406570472
00000128.0000099c::2014/09/22-12:20:40.657 INFO  [CLI] Reading account parameters from local registry
00000128.0000099c::2014/09/22-12:20:40.657 INFO  [CLI] Creating account if needed
00000128.0000099c::2014/09/22-12:20:40.657 INFO  [CLI] Configuring local account
00000204.00000308::2014/09/22-12:20:40.703 INFO  [CAM] CAMTranslateNameToSID - Looking up local name
00000204.00000308::2014/09/22-12:20:40.719 INFO  [CAM] CAMTranslateNameToSID - Finished looking up local name
00000128.0000099c::2014/09/22-12:20:40.719 INFO  [CLI] Account Created
00000128.0000099c::2014/09/22-12:20:40.736 INFO  [CLI] Users group set
00000128.0000099c::2014/09/22-12:20:40.736 INFO  [CLI] Flags set, account configured
00000128.0000099c::2014/09/22-12:20:40.736 INFO  [CLI] Initializing security
00000128.0000099c::2014/09/22-12:20:40.736 INFO  [CLI] Notifying credentials to CAM, creation flags 0, control flags 7
00000204.00000308::2014/09/22-12:20:40.736 INFO  [CAM] CamApCallPackage
00000204.00000308::2014/09/22-12:20:40.736 INFO  [CAM] CallInfo: Proc 296 Thread 2460 Count 0 Att 512
00000204.00000308::2014/09/22-12:20:40.736 INFO  [CAM] ClientInfo: Logon 999 Proc 296 Thread 2460 TCB 1 Impersonating 1 Restrict 0 Flags 0
00000204.00000308::2014/09/22-12:20:40.736 INFO  [CAM] SetCNOCred 296 14 16 30 0 7
00000204.00000308::2014/09/22-12:20:40.736 INFO  [CAM] Setting CurrentUser CLIUSR, Dom FS01-VM (Proc 296)
00000204.00000308::2014/09/22-12:20:40.736 INFO  [CAM] New Process, old 0
00000204.00000308::2014/09/22-12:20:40.736 INFO  [CAM] Creating new token when CNL credentials are set
00000204.00000308::2014/09/22-12:20:40.761 INFO  [CAM] LsaLogon: c000015b
00000204.00000308::2014/09/22-12:20:40.761 ERR   [CAM] Error in creating first token: -1073741477
00000204.00000308::2014/09/22-12:20:40.761 INFO  [CAM] Obtaining current CNL SID
00000204.00000308::2014/09/22-12:20:40.761 INFO  [CAM] CAMTranslateNameToSID - Looking up local name
00000204.00000308::2014/09/22-12:20:40.761 INFO  [CAM] CAMTranslateNameToSID - Finished looking up local name
00000128.0000099c::2014/09/22-12:20:40.761 INFO  [CLI] LsaCallAuthenticationPackage: -1073741477, 0 size: 0, buffer: HDL( 0 )
00000128.0000099c::2014/09/22-12:20:40.854 INFO  [CLI] Credentials Failed to notify CAM
00000128.0000099c::2014/09/22-12:20:40.854 INFO  [CLI] Initializing token
00000204.00000308::2014/09/22-12:20:40.854 INFO  [CAM] CamApCallPackage
00000204.00000308::2014/09/22-12:20:40.854 INFO  [CAM] CallInfo: Proc 296 Thread 2460 Count 0 Att 512
00000204.00000308::2014/09/22-12:20:40.854 INFO  [CAM] ClientInfo: Logon 999 Proc 296 Thread 2460 TCB 1 Impersonating 1 Restrict 0 Flags 0
00000204.00000308::2014/09/22-12:20:40.854 INFO  [CAM] GetCNO forceNew=0
00000204.00000308::2014/09/22-12:20:40.854 INFO  [CAM] GetCNOToken: LUID 0:0, token: e32bea00, DuplicateHandle: c0000008
00000128.0000099c::2014/09/22-12:20:40.854 INFO  [CLI] LsaCallAuthenticationPackage: -1073741816, 0 size: 0, buffer: HDL( 0 )
00000128.0000099c::2014/09/22-12:20:40.873 ERR   mscs::QuorumAgent::FormLeaderWorker::operator (): (c0000008)' because of 'status'
00000be8.00000ae0::2014/09/22-12:23:37.527 DBG   Cluster node cleanup thread started.
00000be8.00000ae0::2014/09/22-12:23:37.527 DBG   Starting cluster node cleanup...
00000be8.00000ae0::2014/09/22-12:23:37.527 DBG   Disabling the cluster service...
00000be8.00000ae0::2014/09/22-12:23:37.527 DBG   Stopping the cluster service...
00000128.000008c8::2014/09/22-12:23:37.527 INFO  [CS] Service Stopping...
00000128.000008c8::2014/09/22-12:23:37.527 INFO  [CORE] Node quorum state is 'Not yet formed or joined a cluster'. Form/join status with other nodes is as follows:
00000128.000008c8::2014/09/22-12:23:37.527 INFO  [DCM] UnregisterSwProvider(): CSV providers are not registered
00000128.000008c8::2014/09/22-12:23:37.527 WARN  [QUORUM] Node 1: weight adjustment not performed, as all remanining voters have weight zero
00000128.000008c8::2014/09/22-12:23:37.527 INFO  [RGP] node 1: MergeAndRestart +() -(1)
00000128.000008c8::2014/09/22-12:23:37.542 INFO  [RGP] sending to 64 nodes 1: 001(1) => 101() +() -(1) [()]
00000128.00000950::2014/09/22-12:23:37.542 INFO  [CORE] Node 1: Proposed View is <ViewChanged joiners=() downers=(1) newView=101() oldView=001(1) joiner=false form=false/>
00000128.000008c8::2014/09/22-12:23:37.652 INFO  [DM]: Shutting down, so unloading the cluster database.
00000128.000008c8::2014/09/22-12:23:37.652 INFO  [DM] Shutting down, so unloading the cluster database (waitForLock: true).
00000128.000008c8::2014/09/22-12:23:37.652 INFO  [CS] Service Stopped...
00000128.000008c8::2014/09/22-12:23:37.652 INFO  [CS] About to exit service...
00000be8.00000ae0::2014/09/22-12:23:39.555 DBG   Releasing clustered storages...
00000be8.00000ae0::2014/09/22-12:23:39.556 DBG   Getting clustered disks...
00000be8.00000ae0::2014/09/22-12:23:39.556 DBG   Waiting for clusdsk to finish its cleanup...
00000be8.00000ae0::2014/09/22-12:23:39.556 DBG   Clearing the clusdisk database...
00000be8.00000ae0::2014/09/22-12:23:39.556 DBG   Waiting for clusdsk to finish its cleanup...
00000be8.00000ae0::2014/09/22-12:23:39.556 DBG   Relinquishing clustered disks...
00000be8.00000ae0::2014/09/22-12:23:39.556 DBG   Opening disk handle by index...
00000be8.00000ae0::2014/09/22-12:23:39.603 DBG   Getting disk ID from layout...
00000be8.00000ae0::2014/09/22-12:23:39.603 DBG   Reset CSV state ...
00000be8.00000ae0::2014/09/22-12:23:39.603 DBG   Relinquish disk if clustered...
00000be8.00000ae0::2014/09/22-12:23:39.628 DBG   Opening disk handle by index...
00000be8.00000ae0::2014/09/22-12:23:39.676 DBG   Getting disk ID from layout...
00000be8.00000ae0::2014/09/22-12:23:39.676 DBG   Reset CSV state ...
00000be8.00000ae0::2014/09/22-12:23:39.676 DBG   Relinquish disk if clustered...
00000be8.00000ae0::2014/09/22-12:23:39.693 DBG   Opening disk handle by index...
00000be8.00000ae0::2014/09/22-12:23:39.753 DBG   Getting disk ID from layout...
00000be8.00000ae0::2014/09/22-12:23:39.768 DBG   Reset CSV state ...
00000be8.00000ae0::2014/09/22-12:23:39.768 DBG   Relinquish disk if clustered...
00000be8.00000ae0::2014/09/22-12:23:39.768 DBG   Opening disk handle by index...
00000be8.00000ae0::2014/09/22-12:23:39.784 DBG   Resetting cluster registry entries...
00000be8.00000ae0::2014/09/22-12:23:39.784 DBG   Resetting NLBSFlags value ...
00000204.00000804::2014/09/22-12:23:39.800 INFO  [CAM] In NotificationHandlerThread
00000204.00000804::2014/09/22-12:23:39.800 INFO  [CAM] NotificationHandlerThread - Setting primary account refresh
00000be8.00000ae0::2014/09/22-12:23:39.852 DBG   Unloading the cluster Windows registry hive...
00000be8.00000ae0::2014/09/22-12:23:39.852 DBG   Getting the cluster Windows registry hive file path...
00000be8.00000ae0::2014/09/22-12:23:39.852 DBG   Getting the cluster Windows registry hive file path...
00000be8.00000ae0::2014/09/22-12:23:39.852 DBG   Getting the cluster Windows registry hive file path...


Reclaim disk space after deleting VHD


Hello,

We have a Hyper-V cluster installed. We created a new LUN, then moved one VHD from the existing LUN to the new one, but the free space on the first LUN is still the same. (For example: if the free space was 1.5 TB and we removed a 0.5 TB VHD, the free space should be 2 TB, but it's still 1.5 TB.)

How can we reclaim the deleted space?

What could be the problem, and what are the best practices for creating LUNs?
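
If it matters, the LUN may be thin-provisioned on the array side. Here is a sketch of what I'm considering running, assuming the array supports UNMAP/TRIM (the drive letter is a placeholder) - my understanding is that NTFS frees the space immediately, but the array only reclaims it once the freed blocks are trimmed:

# Ask Windows to re-send TRIM/UNMAP for free space on the volume that used to hold the VHD
Optimize-Volume -DriveLetter E -ReTrim -Verbose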

Regards,

New failover cluster and Hyper-V VM migration

We have 2 physical servers that host all our Hyper-V VMs. We have a 2-node failover cluster set up with all the VM storage on the clustered storage, which is an iSCSI virtual disk.

1. Can we create a new failover cluster and move all the Hyper-V VMs to the new cluster? If yes, how? What do I need to be aware of in order to get a seamless migration?

2. If we move the VHDX file to a new machine, can we bring up the VM in its most recent state with all its settings, or do we need more files along with the VHDX? (See the sketch at the end of this post.)

3. Is backing up the VHDX files enough, or are additional files needed?

4. Can we create a VM on Azure from the VHDX file? How?

The reason we need to move to a new failover cluster on our secondary storage is to reset the primary storage and reconfigure it from scratch.
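
For question 2, the export/import route is what I'm considering, since it carries the VM configuration and checkpoints along with the VHDX instead of moving the disk file alone (a sketch; paths and names are placeholders):

# On the old cluster/host: export the VM (configuration, checkpoints and virtual disks)
Export-VM -Name "AppVM01" -Path "\\newstorage\VMExport"

# On a node of the new cluster: import a copy and register it as a new VM
$config = Get-ChildItem "\\newstorage\VMExport\AppVM01\Virtual Machines" -Recurse -Include *.xml, *.vmcx |
    Select-Object -First 1
Import-VM -Path $config.FullName -Copy -GenerateNewId

# Then make the VM highly available on the new cluster
Add-ClusterVirtualMachineRole -VMName "AppVM01"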