User Experience, Wearable technology, and other critical aspects for EMM (Enterprise Mobility Management) projects

December 5th, 2014 No comments


I want to cover one of the most important parts of Enterprise Mobility Management (EMM): the user experience. Too often we technical types get hung up on the technical ‘what can it do’ and ‘where in the organization does this fit’ aspects of technology. Sometimes you have to take a big step back and look at it from the user’s perspective. They are used to consumer level iPhone and Android apps. If your solution isn’t as easy for them to digest as the consumer apps they use daily, I promise you they will reject your solution.

A fragmented user experience can easily happen if you build silos of technology. One vendor for this managed by this guy, one vendor for that managed by that guy, etc. Build an ecosystem instead. All mobility products should in some regard be converged. Not all vendors can do that, but many of the big names in EMM are starting to see it. What’s with everyone naming everything Workspace now? They’re finally getting that it’s about the user experience and the user’s “stuff” presented to them in one location. When you fracture the user experience by selecting vendors that don’t integrate, you leave a void. And users will find a way to fill those voids where there is no integration. They are practically trained to fill voids by the very consumer devices they carry. Think of native phone apps: Apple Maps vs. going out and finding Google Maps, Waze, etc. in the App Store. Whatever voids the manufacturer of their device left, they’ll find an app for that.

At the moment my career is very Citrix focused. It was VMware for the longest time. And down the road it might be something else. The tech I work on changes, big deal. But users don’t. They drive the business. They drive IT, both positively and negatively. A data breach happens, IT has to react. Users want BYOD, IT has to react and allow it. IT isn’t about information technology only. Everything we do is end user driven. Whatever the newest disruptive technology is, it’s usually something IT tries to fight at first because they don’t fully understand it or have no clue how to manage it. Then IT finally gives up and reacts to it because the users demand it.

I blame Steve Jobs for this. This all started with the iPod way back in 2001. That very first iPod softened the masses to the idea of taking their “stuff” with them. Their whole music library in a tiny, easy to use device they can walk around with. It taught users what mobility is. In 2004 iPods began to dominate the market because Apple listened to, guess what, their users, and released iTunes for Windows. Now all of a sudden iPods were compatible with everyone’s systems. The iPhone was released in 2007, the next natural progression for mobility: take your music and everything else with you in a single device. Enterprises were shaking in their boots. Users wanted corporate email on their shiny new personal iPhones, but companies were still pushing Blackberry and BES and totalitarian control over the devices. And what did users do? They forced the enterprise to change. Change their policies and change their way of looking at user devices. Tada, BYOD was born, and if you couldn’t get iOS, Android, and Windows phones working with your email system securely, you soon would be.

My advice to Enterprises looking into EMM is probably a bit unconventional. I believe anything in the Mobility space should be driven by consumer services more so than enterprise services. Watch the big boy services like Facebook fail and lose user base because of horrible UI. Incomprehensible privacy settings. Useless timeline drivel. Users are actively seeking out other ways to satisfy their social media needs, just like they find a way around every enterprise block you put in place. Users are far more tech savvy these days than they were 5 years ago. Watch the consumer market closely. Let’s say you want to set up enterprise file share and sync. Let Google Drive and Dropbox duke it out. Watch why users pick one service over another. Consumer IT is the biggest form of UAT we have. You get all those valuable UAT cycles merely by watching what the consumer industry is doing. Then build the same type of winning service within the Enterprise and watch your users flock to it. You can’t take away a consumer level service they are used to and offer them nothing in return. In this age of hyper-converged storage and infrastructure, think of enterprise mobility as the ultimate hyper convergence: the converging of personal and corporate user experience. To the user, this should be absolutely seamless.


Have you thought about Wearable technology and EMM yet? You will be soon. The Wearable devices space right now is so new, no one knows what’s going to happen. Even the manufacturers don’t know what to do. Square face or round for your watch? Rubber strap and AMOLED or metal strap and LCD? Let Google and Apple duke it out over what works and what doesn’t. Does the user want info driven from their phone, or should it be a standalone device? Do they want interactive apps or informational notices? Do they want it auto filtered/smart filtered, or an avalanche of info coming to it all the time? Do they want location aware info? How about task aware? Example: I’m in a sales meeting with Bob and Susan, and the fact that Bob just had a kid was auto-skimmed from a LinkedIn post and beamed to my watch. I now engage Bob on a personal front about this new baby and thus build a rapport I otherwise wouldn’t have been able to. How about habit based? I get coffee and stop by Mark’s office to chat every morning. I want football scores and last night’s TPS reports around this time every day so I can talk to Mark about them, so they automatically pop up on my watch. Who knows what’s going to happen. No one can predict it right now. The User Experience folks at Apple, Google, Samsung, etc. are working hard to try and guess what the user wants. What is not disruptive to the user and actually adds value to their lifestyle. IT has to think the same way and apply the same concept to the “workstyle”. Lifestyle and workstyle should essentially be the same experience to the user.

Be wary of EMM vendors selling technology and no long term vision. They’re playing catch-up. They are reactionary. One of their competitors does it, so they start looking into how to implement it within their own solution to stay competitive. Avoid those that don’t see the bigger picture, which is a full-blown ecosystem accessible and functional in some regard from the largest to the tiniest screens. We’re not in the age of pocket devices and smartphones anymore. That’s old news in the EMM space. We’re entering a new age of wearables. Those little watches, glasses, etc. are all eventually going to be sold by Verizon, AT&T, etc. with 4G/LTE connectivity in a few years. But they won’t have the processing power. They won’t be easy to interact with, but users will demand it. We don’t live in the Tony Stark age, touching holograms with a Jarvis-like mainframe driving it all. We’re still taking baby steps. So until then the Datacenter will be the processor, 4G the transport, and your EMM solution the delivery mechanism. What we stream in from the datacenter and can provide on the tiniest screen will be the future of wearable computing and the future of EMM in my opinion. Yeah, not everyone is ready to wear goofy glasses on their face yet, but people will be lining up for watches because it’s not disruptive to their lifestyle. It simply adds value.

Next year once the Apple Watch is released, we might see a good number of folks become early adopters and buy the gen 1 device. And a year after that, when the gen 2 device is out, people will stampede to it just like they did with the iPod and iPhone. And what will users do next? They’ll ask for their corporate “stuff” on their watch right next to their Gmail app. For those early adopters of wearable devices: swipe swipe swipe Gmail…swipe swipe swipe corporate mail. You know that’s not gonna fly. Users are going to demand something much more fluid on that tiny screen. One-click access to each, or users will reject it and find some consumer level app that displays email and IM better, and suddenly your network is exposed on another front. How you manage to get corporate info to flow onto the tiniest screen will become the biggest challenge in the next few years.

I always love to talk security so I can’t leave that out. What about MDM (mobile device management) for wearables? Just the other day I had a co-worker showing off his Android Wear watch. While another co-worker was talking to him, the watch picked up on this second co-worker’s foreign accent and began to display info on the country the accent was from. Can you imagine what this will mean for the corporate workplace? Intellectual property, previously air gapped SCADA systems (supervisory control and data acquisition) or ICS (industrial control system) info, trade secrets, etc. are all exposed to the cloud merely by speaking about them out loud! These wearables may have NFC (near field communication), so what does this imply for air gapped systems or badge readers? Do you see how EMM vendors have to start planning now how to tackle and innovate in this space before wearables go fully mainstream and leak into the corporate world? You want to pick the EMM vendor that is already preparing and planning for this scenario. No, no one is going to connect a bluetooth mouse to their watch and attempt to use a virtual desktop on it right this second. But they will try at some point, someone always does. And you have to be prepared to manage that experience and anything else your users and the consumer world throw at you.

At the end of the day, when you fracture the user experience by selecting vendors that don’t integrate, you will leave your users looking for ways to fill voids that shouldn’t exist in the first place, or they will outright reject your solution. If you choose a vendor that isn’t already considering what’s on the horizon with “disruptive” technology like Wearables, then you need to start asking them those questions. Be proactive and watch what’s going on with consumer devices and consumer services. Your goal should be to become a master at providing an Enterprise “Workstyle” that matches the user’s “Lifestyle”.

Citrix XenApp 7.x VDA Registration State stuck in Initializing and a self healing powershell script to fix it

October 16th, 2014 No comments

On XenApp 7.5 servers (and possibly 7.1 and 7.0), you may notice the registration state of the machine is stuck on “Initializing” in Citrix Studio and no one will be able to launch any apps. I’m still investigating if the 7.6 VDA also has this behavior. This is how it looks in Studio:


You can also run the following PowerShell commands:

Add-PSSnapin Citrix.*.Admin.V*
Get-BrokerMachine | Select-Object MachineName, SummaryState, MachineInternalState, RegistrationState


You will notice the impacted server in question has a MachineInternalState set to “Unavailable” and the RegistrationState is stuck on “Initializing”. We’ve noticed this happening on XenApp servers that have been up around the 24 or 25 day mark. They suddenly stop being registered.

The work around is to restart the “Citrix Desktop Service” on the impacted server. The service is not in a Stopped state, so it’s hard to set up monitoring using a 3rd party monitoring tool. I just wrote this PowerShell script that queries the delivery controller for the actual registration state of every XenApp server to see if it’s stuck Initializing, restarts the Citrix Desktop Service on the impacted servers, logs a .csv file with the names of the impacted servers for historical purposes, and sends an email notification out to the Citrix admins and NOC. I have this running as a scheduled task (staggered a few minutes apart) every 5 minutes on all my delivery controllers:

##Written by Jason Samuel -

Add-PSSnapin Citrix.*.Admin.V*

$Results = @()
$Date = (Get-Date)
$save_date = $Date.ToString("MM-dd-yyyy-hh-mm-ss-tt")
$EmailBodyList = ""

$Results += Get-BrokerMachine -RegistrationState "Initializing" | Select-Object DNSName, AgentVersion

If (!$Results) {
    ##Nothing is stuck Initializing so there is nothing to do
    Exit
}
Else {
    ##Restart the Citrix Desktop Service on stuck servers
    foreach ($vda in $Results) {
        Restart-Service -InputObject $(Get-Service -ComputerName $vda.DNSName -Name "Citrix Desktop Service") -Verbose
        $EmailBodyList += $vda.DNSName + "`t" + $vda.AgentVersion + "`r`n"
    }

    ##Writes the result to a CSV file on your delivery controller for historical purposes
    $file_output = ('D:\Citrix_VDA_Restart\XA_VDA_' + $save_date + '.csv')
    $Results | Export-Csv -Path $file_output -NoTypeInformation

    ##Sends an email with the list of servers in the body as well as the CSV attachment
    $filename = $file_output
    $smtpServer = "YOURSMTPSERVER"
    $msg = New-Object Net.Mail.MailMessage
    $att = New-Object Net.Mail.Attachment($filename)
    $smtp = New-Object Net.Mail.SmtpClient($smtpServer)
    $msg.From = ""    ##fill in your from address
    $msg.To.Add("")    ##fill in the notification recipients
    $msg.Subject = "XenApp 7.6 servers stuck Initializing"
    $msg.Body = "The below XenApp 7.6 servers are stuck Initializing on the Delivery Controllers. This script is now attempting to restart the Citrix Desktop Service on the impacted servers and force the VDA to register." + "`r`n" + "`r`n" + $EmailBodyList
    $msg.Attachments.Add($att)
    $smtp.Send($msg)
    $att.Dispose()

    Start-Sleep -s 5
}


WordPress will run the code off the edge of the page, so you can highlight all of the above and paste it into Notepad, or download it here in text format (make sure you right click – save as). Just change the extension to .ps1 and set your path for the CSV files, SMTP server, and the email addresses you want notifications to go to:
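If you would rather script the scheduled task creation as well, here is a minimal sketch using the ScheduledTasks module (Server 2012 and up). The task name and .ps1 path below are example values I made up, so adjust them to wherever you saved the script:

```powershell
##Sketch only - registers the self-healing script to run every 5 minutes as SYSTEM.
##"Check VDA Registration" and the .ps1 path are placeholders, not fixed names.
$action = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-ExecutionPolicy Bypass -File D:\Citrix_VDA_Restart\Check-VDARegistration.ps1"
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Minutes 5) -RepetitionDuration ([TimeSpan]::MaxValue)
Register-ScheduledTask -TaskName "Check VDA Registration" -Action $action -Trigger $trigger `
    -User "SYSTEM" -RunLevel Highest
```

Stagger the -At time by a couple of minutes on each delivery controller so the tasks don’t all fire at once.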


Anyhow, you will get a nice notification email like this when servers get stuck. In this example I had 2 servers that were stuck Initializing, and by the time the email was received the VDA had already been restarted on both and they were Registered again. :)

I’m hoping this is fixed in the 7.6 VDA and am in the process of testing. I really don’t like having to set up scripts to monitor service health like this, but in a pinch, running a self-healing monitoring script is your best bet to prevent an application outage until the issue is resolved. I’m currently monitoring all servers with the 7.6 VDA and will update this post if I see them exhibit the same behavior as the 7.5 VDA.

Just updated the script above to include the agent version. Here’s how your notification email and .csv file will look:



Citrix console failed to remove the server from the farm with error code 80000007

October 1st, 2014 No comments

Had to clean up a Presentation Server 4.0 farm today. Yes, you heard that right, PS 4.0. Yuck. Lots of servers that were decommed but never removed from the farm. I needed to eliminate the junk, see what was really going on with this farm, and get the apps migrated to XenApp 7.6. So basically I had a bunch of dead servers that needed to be removed from the farm by force before I could proceed with my analysis of the good apps that were left. If you go through the console and try to remove each server one by one, you get this message:

The Presentation Server Console failed to remove the server. Error Code: 80000007


Do the following via command line:

qfarm

You will see the server is still a member of the farm.

So let’s hit the data store and kill it. Do a:

dscheck /full servers /clean

The server will most likely remain, since (as you verified with qfarm) it’s not really an inconsistent record and is technically a valid entry. The only option is to forcefully delete it:

dscheck /full servers /deletemf SERVERNAME

Keep in mind SERVERNAME is case sensitive, so if qfarm shows the server name in all capital letters, you must type it in all capital letters. Now run the clean command again:

dscheck /full servers /clean

Now when you reopen the console and do a discovery, the server will be gone for good.
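If you have a whole pile of dead servers to purge like I did, you can loop the force-delete instead of typing it per server. A quick sketch, assuming you have saved the dead server names (matching the case shown by qfarm) one per line in a deadservers.txt file you create yourself:

```powershell
##Sketch only - force-delete every server listed in deadservers.txt, then clean the data store.
##Names in the file must match the case shown by qfarm since /deletemf is case sensitive.
Get-Content .\deadservers.txt | ForEach-Object {
    dscheck /full servers /deletemf $_
}
dscheck /full servers /clean
```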

XenDesktop VMs on PVS getting Netlogon 5719 and Group Policy 1129 errors

August 11th, 2014 No comments

After reverse imaging and doing a XenServer Tools update, I had some issues with the VMs talking to the DC and getting group policies to apply consistently at boot. When XenServer Tools is updated, it may remove and re-add the NICs. This sometimes changes the provider order.

The symptoms were several Netlogon 5719 errors:


This computer was not able to set up a secure session with a domain controller in domain XXXXXXXX due to the following:
There are currently no logon servers available to service the logon request.
This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator.
If this computer is a domain controller for the specified domain, it sets up the secure session to the primary domain controller emulator in the specified domain. Otherwise, this computer sets up the secure session to any domain controller in the specified domain.

and Group Policy 1129 errors in the event log:

The processing of Group Policy failed because of lack of network connectivity to a domain controller. This may be a transient condition. A success message would be generated once the machine gets connected to the domain controller and Group Policy has successfully processed. If you do not see a success message for several hours, then contact your administrator.


To resolve this I did the following:

1. Verified the streaming NIC is at the top of the binding order. In my case, NIC 3 is my streaming NIC and NIC 4 is my local network traffic NIC. So it should look like this:


2. Adjusted the provider order so “Microsoft Windows Network” is on top and Symantec or whatever antivirus you are using is at the very bottom:



3. Created the following registry value:

Key: HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters
Value Name: ExpectedDialupDelay
Data Type: REG_DWORD

The data range is between 0 and 600 seconds (10 minutes) and the default is 0. I set it to 600 seconds. The thought is that Netlogon is starting before the NIC is fully up. This is a timing issue, evidenced by group policy errors like the following where the number of milliseconds is set to 0, meaning it didn’t even try to talk to the DC:


Setting the delay to 10 minutes gives the VM some buffer room to get the network up.
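If you are applying this to a lot of images, the same registry change can be scripted. A minimal PowerShell sketch (run elevated on the target image):

```powershell
##Sets the Netlogon ExpectedDialupDelay value to 600 seconds (10 minutes)
$key = "HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters"
New-ItemProperty -Path $key -Name "ExpectedDialupDelay" -Value 600 -PropertyType DWord -Force
```

-Force makes it overwrite the value if it already exists, so it is safe to re-run.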

How to use PsExec and Xcopy to pull data off a large number of remote machines simultaneously

July 30th, 2014 No comments

My co-worker and I were in a bit of a pinch recently and had to quickly pull data off a large number of VMs as fast as possible. We had a real time crunch to get it done.


Well, Ctrl+C Ctrl+V is one way to do it, but you’re better than that. The easier way is to use PsExec and Xcopy/Robocopy. Getting PsExec and Xcopy to play nicely is sometimes a bit tricky. Here’s a really quick and dirty script to get it done. Not the most efficient, but it will work in a pinch.

1. Copy psexec into a folder on the server you plan to copy your data to. Let’s call this folder D:\DataBackup. Now share it out to Everyone with Read/Write access.

2. Create a .bat file with the following. Let’s call it remotescript.bat and let’s say you are after user profile data. So:

xcopy "\\%computername%\c$\Users" "\\yourservername\DataBackup\%computername%\" /e /h /c /y /exclude:\\yourservername\DataBackup\excludes.txt
exit

Let me explain what it does. Xcopy is invoked against the local VM’s C:\Users folder. It then excludes any directories you specify in an excludes.txt file located on the file share you created. Then it copies whatever is left to the file share under a directory with the VM’s name. The switches are doing the following:

/e – Copies all subdirectories, even empty ones
/h – Copies files with hidden and system file attributes
/c – Ignores errors
/y – Suppresses overwrite file confirmation prompts

When it’s all done, the batch file exits.

3. Create the excludes.txt file in this same folder. This will contain all the directories you don’t want to copy over. Here’s an example of mine:

\All Users

You can even do a wildcard of sorts: if a path starts with a given string, it won’t be copied. For example, if I have several test account profiles that all start with “Test_userID” then I would add:

\Test_userID

4. In your file share, create a file called VMnames.txt and have 1 name per line. Pretty simple. Export it from wherever you want and massage the data in Excel if you need to. Text to Columns works wonders.
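If your VMs live in AD, you don’t even need Excel; you can generate VMnames.txt straight from PowerShell. A sketch, assuming the ActiveDirectory module is installed and you swap the example OU path below for your own:

```powershell
##Sketch only - dump computer names from an example OU into VMnames.txt, one per line.
##"OU=VDI,DC=contoso,DC=com" is a placeholder, use your own search base.
Import-Module ActiveDirectory
Get-ADComputer -Filter * -SearchBase "OU=VDI,DC=contoso,DC=com" |
    Select-Object -ExpandProperty Name |
    Set-Content "\\yourservername\DataBackup\VMnames.txt"
```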

5. Now open up a cmd prompt and go to the folder where you have everything staged so far. Run the following, where the user ID has admin rights on the VMs:

PsExec -u domain\userID -p xxxxxxxxxx @VMnames.txt -d -c -f remotescript.bat

Now let me explain this. PsExec is invoked and will pass your user ID and password to the remote machines specified in VMnames.txt and run remotescript.bat. Here is what the switches are doing:

-d – Don’t wait for the script to finish running on each VM. Basically you are telling the script to run on all VMs in parallel. Otherwise you’ll be sitting around all day as each VM finishes copying.

-c – Copy remotescript.bat to the remote machine.

-f – Force the copy in the event remotescript.bat was already copied to the machine. Comes in handy if you did some testing on a few VMs first before letting the script loose on all the VMs.

Hope this helps. You can also use Robocopy which is what I prefer over Xcopy. Just modify the above accordingly. Here’s a standalone Robocopy script I like to use to copy all files, empty folders, and ACLs while still retaining time stamps. Comes in handy all the time:

robocopy "c:\SourceFolder" "\\ServerName\c$\DestinationFolder" /E /ZB /DCOPY:T /COPYALL /R:1 /W:1 /V /TEE /LOG:Robocopy.log

Here is what the switches mean:

/E – Copy subfolders including empty folders
/ZB – Use restartable mode
/DCOPY:T – Copy file directory timestamps
/COPYALL – Copy all file info (data, attributes, time stamps, ACL, owner info, and auditing info)
/R:1 – Retry failed copies once
/W:1 – Wait 1 second between retries
/V – Verbose logging
/TEE – Writes the status to the console window and to the log file
/LOG: – Specifies log file and will overwrite if there is already one named the same
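One thing to keep in mind if you wrap Robocopy in a script: its exit codes aren’t the usual 0-means-success convention. Codes 0 through 7 are informational (0 = nothing to copy, 1 = files copied, and so on), and only 8 or higher means something actually failed. A quick PowerShell sketch around the same command:

```powershell
##Run the copy and flag real failures; robocopy exit codes of 8 or higher indicate errors
robocopy "c:\SourceFolder" "\\ServerName\c$\DestinationFolder" /E /ZB /DCOPY:T /COPYALL /R:1 /W:1 /V /TEE /LOG:Robocopy.log
if ($LASTEXITCODE -ge 8) {
    Write-Warning "Robocopy reported failures (exit code $LASTEXITCODE), check Robocopy.log"
}
```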