Updated 9/28/2017 – Including direct references to Ignite content relevant to architecture

Ignite 2017 has turned out to be quite the stir for Unified Communications…err…I mean, Intelligent Communications. The big news that Microsoft intends to (eventually) sunset Skype for Business Online in favor of Microsoft Teams has once again significantly altered the trajectory of partners and customers consuming Microsoft’s communications services.  While much can be said about the pros & cons of this approach, the end result is that customers and partners (myself included) must change and adapt.  The back-end processes and infrastructure of Microsoft Teams are a bit of a mystery, with limited technical information available when compared to Lync/Skype for Business.  Given that this information will only come out over time as Microsoft enhances Teams with the IT-policy controls and documentation that existed for Skype4B, I realized that some insights can be gathered through old-fashioned manual work: that’s right…simple network traces have proven to be hugely informative and provide a peek into the inner workings of Teams.  With that in mind, what follows are pieces of information I was able to glean, with the caveat that they will be updated/corrected later on, as Microsoft begins to release official information that will supersede the info I have here.

Sign In Process Information

For any seasoned Lync/Skype admin, we all know that specific DNS records are required in order for the client to discover the FQDNs of the pools an account is homed to.  The autodiscover process is (relatively) well documented and oftentimes poorly understood (and implemented).  For Teams, there is no hybrid support – you’re all-in within the cloud.  Microsoft doesn’t explicitly document what FQDNs are used…but Wireshark or Message Analyzer will!

Upon application start, Teams initially performs a DNS A record query for:

  • pipe.skype.com

The DNS query response gives us the first clue that Microsoft’s usage of CDN networks has begun to creep into its UC (IC) platform.  Two separate CNAME records are returned for this query:

  • pipe.prd.skypedata.akadns.net
  • pipe.cloudapp.aria.akadns.net

The resulting IP address is 40.117.100.83, but given that a CDN is in play, this IP address will vary for others across the globe. Indeed, the akadns.net domain is owned by Akamai and is part of their global CDN network. An examination of the final CNAME record shows that at least 11 separate IP addresses are available across the globe!
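
If you want to repeat this lookup yourself, the chain is easy to walk in code. Below is a minimal sketch using dnspython (a third-party library, installed via pip install dnspython); the records and addresses it prints will differ by region and will certainly drift over time as Microsoft changes the service.

```python
# Sketch: follow the CNAME chain for pipe.skype.com hop by hop,
# then print the final A record(s). Assumes dnspython is installed.
import dns.resolver

def walk_cname_chain(name, max_hops=10):
    """Follow CNAME records until an A record answers, printing each hop."""
    for _ in range(max_hops):
        try:
            answer = dns.resolver.resolve(name, "CNAME")
            target = str(answer[0].target).rstrip(".")
            print(f"{name} -> CNAME -> {target}")
            name = target
        except dns.resolver.NoAnswer:
            # No further CNAMEs; resolve the final A record(s)
            for rr in dns.resolver.resolve(name, "A"):
                print(f"{name} -> A -> {rr.address}")
            return

walk_cname_chain("pipe.skype.com")
```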

There is a good deal of TLS-encrypted traffic following the resolution of pipe.cloudapp.aria.akadns.net, but eventually another DNS query is triggered for:

  • teams.microsoft.com

The DNS query response gives us a separate CNAME record:

  • s-0001.s-msedge.net

The resulting IP address is 13.107.3.128, but an important note is that the FQDN is associated with the Microsoft Edge node network, msedge.net.  The IP address resolution for this FQDN is the same across the globe, which leads me to believe that Microsoft has begun to migrate some Teams traffic to utilize AnyCast, thus ensuring clients take the shortest path to ingress to the Microsoft network.  It may also be possible that there is only one ingress point for this name and Geo-DNS and/or AnyCast is not in use, but I’m not sure that would be the case.
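
One rough way to test the AnyCast hypothesis from a single vantage point is to ask several geographically distributed public resolvers for the same name and compare answers: identical A records everywhere is consistent with AnyCast (though not proof), while region-specific answers would point to Geo-DNS instead. A sketch, again assuming dnspython:

```python
# Sketch: query multiple public resolvers for s-0001.s-msedge.net
# and compare the A records each one returns.
import dns.resolver

RESOLVERS = {"Google": "8.8.8.8", "OpenDNS": "208.67.222.222", "Level3": "4.2.2.1"}

for label, server in RESOLVERS.items():
    r = dns.resolver.Resolver()
    r.nameservers = [server]           # bypass the local resolver
    ips = sorted(rr.address for rr in r.resolve("s-0001.s-msedge.net", "A"))
    print(f"{label:10s} -> {', '.join(ips)}")
```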

Following the connection to the edge node, authentication requests occur and I’m prompted for Modern Authentication credentials.  The process happens largely outside of the FQDNs and IP blocks that Microsoft lists for Teams (login.microsoftonline.com), so I won’t cover the details here.  Following completion of the authentication process, however, the client then continues communications to pipe.cloudapp.aria.akadns.net.

A few thousand packets later, another DNS query comes across:

  • us-api.asm.skype.com

The DNS query response gives another entry point into the CDN networks via another CNAME record:

  • us-api.skype-asm.akadns.net

The resulting IP address is 40.123.43.195, but given that a CDN is in play, this IP address will vary for others across the globe. An examination of the final CNAME record shows that at least 2 separate IP addresses are available across the globe.

I can only speculate as to what the us-api FQDN is for, but it sure seems like a Front End system, because shortly thereafter my client is handed a very specific geo-localized FQDN, which it then queries:

  • amer-client-ss.msg.skype.com

The DNS query response gives multiple CNAME references:

  • amer-client-ss.msg.skype.com.akadns.net
  • skypechatspaces-amer-client-geo.msg.skype.com.akadns.net
  • bn3p-client-dc.msg.skype.com.akadns.net

The IP address returned is 40.84.28.125, but the number of CNAME referrals and even the names of the FQDNs lead one to believe that several layers of CDN and/or Geo-DNS localization are potentially occurring.

Note:  I’m skipping several DNS queries just to keep things short(er), but know that there are 3-4 other FQDNs and referrals I am leaving out for brevity’s sake.

There was a critical note made during an Ignite presentation that the Teams infrastructure was built to run on Azure, and eventually a DNS query crossed the wire that proves it:

  • api.cc.skype.com

The DNS query response gives multiple CNAME references:

  • api-cc-skype.trafficmanager.net
  • a-cc-usea2-01-skype.cloudapp.net

Big Deal…So Where’s Azure?

The answer to that is in the CNAME FQDNs above:

  • trafficmanager.net
  • cloudapp.net

Both of these domains are owned and utilized by Azure.  Each has its own purpose, mind you: Traffic Manager is designed to direct client requests to the most appropriate endpoint based on health status and traffic-routing methods, while cloudapp.net FQDNs are used when architects build an app or service within Azure.  This is the “proof in the pudding”, as they say, that Microsoft really is putting all their chips on Azure as the future of the cloud, folks.

The Teams service really does operate via Azure, and Microsoft is using their own tools and services to optimize the traffic.
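
That check can even be automated: the hypothetical helper below (a sketch, assuming dnspython; the suffix list is just the two Azure domains named above) walks a name’s CNAME chain and flags it as Azure-hosted if any hop lands in trafficmanager.net or cloudapp.net.

```python
# Sketch: classify an FQDN as Azure-hosted if its CNAME chain
# passes through Traffic Manager or a cloudapp.net deployment.
import dns.resolver

AZURE_SUFFIXES = ("trafficmanager.net", "cloudapp.net")

def looks_azure_hosted(name, max_hops=10):
    for _ in range(max_hops):
        try:
            target = str(dns.resolver.resolve(name, "CNAME")[0].target).rstrip(".")
        except dns.resolver.NoAnswer:
            return False               # end of chain, no Azure hop seen
        if target.endswith(AZURE_SUFFIXES):
            return True
        name = target
    return False

print(looks_azure_hosted("api.cc.skype.com"))   # True in this capture
```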

What About Skype4B Interop?

While it is true that Teams has a brand-new infrastructure, the Teams client does still offer some backwards compatibility with Skype4B.  Indeed, the DNS queries prove that there absolutely is connectivity to at least some portion of the Skype4B Online infrastructure:

  • webdir.online.lync.com
  • webdir1a.online.lync.com
  • webdir2a.online.lync.com
  • webpooldm12a17.infra.lync.com

There’s no configuration in the client anywhere for the legacy webdir discovery record, so this must be a hard-coded behavior that triggers the resolution process.
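
Those legacy records are easy to confirm without any special tooling; this quick standard-library sketch (the FQDN list comes straight from the capture above) resolves each one:

```python
# Sketch: resolve the legacy Skype4B Online discovery records.
# getaddrinfo follows the CNAME chain and returns the final IPs.
import socket

LEGACY_FQDNS = [
    "webdir.online.lync.com",
    "webdir1a.online.lync.com",
    "webdir2a.online.lync.com",
]

for fqdn in LEGACY_FQDNS:
    ips = {info[4][0] for info in socket.getaddrinfo(fqdn, 443, socket.AF_INET)}
    print(f"{fqdn}: {', '.join(sorted(ips))}")
```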

But What About Media?

Of all the unknowns about Teams, the media stack is the most interesting to me.  Lync/Skype4B had very robust media stacks that were configurable to an extent (more so for on-premises customers).  For Teams, however, little information about media is available.  A few things we can safely assume:

  • Given that Teams & Skype4B can interop, that means ICE, STUN, and TURN are used.
  • Audio and video codecs between Teams & Skype4B offer at a minimum SILK and H.264UC, but also (hopefully) G.722 and yes, even RTAudio
  • Media is, as expected, encrypted by SRTP

Given that little can be known without examining ETL files, I’m surmising a few details and noting a few others…  The following details were noticed when joining a Teams-native conference, including IP audio, IP video, and screen share.

1 – Skype AnyCast Servers are in the Mix

Fire up a conference and you will indeed see the Teams client fire off STUN requests to the global Skype AnyCast IP of 13.107.8.22.

The traffic itself does NOT remain there, but there were 33 packets sent to and from the AnyCast IP.  Indeed, the Skype Network Testing Tool behaves similarly: only the first sets of packets are sent to the AnyCast IP before the traffic is offloaded to a different IP…

The second IP referenced is short-lived as well, with only 51 packets in total.
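
If you want to knock on that AnyCast front door yourself, a STUN Binding Request is only 20 bytes and easy to hand-roll per RFC 5389. This standard-library sketch (the target IP and port are the ones observed in my capture) sends one and decodes the response header:

```python
# Sketch: send a STUN Binding Request to the Skype AnyCast IP and
# decode the response header. Type 0x0101 is a Binding Success Response.
import os
import socket
import struct

def stun_binding_request():
    # Type 0x0001 (Binding Request), length 0, magic cookie, 12-byte transaction ID
    return struct.pack("!HHI12s", 0x0001, 0, 0x2112A442, os.urandom(12))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(stun_binding_request(), ("13.107.8.22", 3478))
data, addr = sock.recvfrom(2048)
msg_type, msg_len = struct.unpack("!HH", data[:4])
print(f"Response from {addr}: type=0x{msg_type:04x}, length={msg_len}")
```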

2 – Teams Edge Servers?

What seems very interesting is that, for a time, STUN traffic appears to be duplicated to multiple destination IP addresses:

  • 104.44.195.205
  • 23.100.65.165

The duplicate traffic flows exist for the start of the call, but then traffic settles on what appears to be a direct path to the 23.100.65.165 IP address, accounting for 8,303 packets.

The final flow above looks similar to the connection you would expect to see when an external Skype4B client connects to the 50K port range of a call negotiated through the external interface of an Edge server.  It seems ICE, STUN, and TURN are definitely at play…

3 – Source Ports seem to be Different

For enterprise customers, Skype4B offered defined source ports you would see client traffic originate from (50,000-50,059 UDP/TCP).  Teams, it seems (HA – unintentional rhyme), does not adhere to those same ports.  I count at least three separate source ports utilized by my client when communicating to the cloud MCU:

  • 8085->51261
  • 20878->53692
  • 26563->59150

Unfortunately, it was difficult to determine which modality was using which source port (especially since Teams doesn’t produce logs that can be examined in Snooper), but I’m pretty confident that 8085 was my audio stream.  The other two were video and/or desktop share.
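
Grouping a capture by flow at least makes the port usage easy to tabulate. Here’s a sketch using scapy (third-party, pip install scapy); the pcap filename and client IP are hypothetical placeholders for your own capture:

```python
# Sketch: tally UDP flows originating from the capture host so the
# client source ports stand out.
from collections import Counter
from scapy.all import rdpcap, IP, UDP

CLIENT_IP = "192.168.1.50"             # hypothetical capture host address
flows = Counter()

for pkt in rdpcap("teams-call.pcap"):  # hypothetical capture file
    if IP in pkt and UDP in pkt and pkt[IP].src == CLIENT_IP:
        flows[(pkt[UDP].sport, pkt[IP].dst, pkt[UDP].dport)] += 1

for (sport, dst, dport), count in flows.most_common(10):
    print(f"{sport} -> {dst}:{dport}  ({count} packets)")
```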

The port change is surprising and worrisome, as enterprise customers cannot reliably apply QoS policy without pre-defined ports to match on, such as those in the previous Skype4B configuration.

4 – STUN Ports Still Standard (Mostly)

UDP 3478 is known as the port used for STUN, and the Teams client definitely uses it.

UDP 3479-3481 were recently added to Microsoft’s requirements for Teams & Skype4B, but I cannot find a single packet that used them.
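
The same capture can be interrogated for the newly documented range; this scapy sketch (same hypothetical pcap as above) counts packets per destination port across UDP 3478-3481:

```python
# Sketch: count packets hitting each STUN-range destination port.
from collections import Counter
from scapy.all import rdpcap, UDP

STUN_PORTS = (3478, 3479, 3480, 3481)
port_counts = Counter()

for pkt in rdpcap("teams-call.pcap"):
    if UDP in pkt and pkt[UDP].dport in STUN_PORTS:
        port_counts[pkt[UDP].dport] += 1

for port in STUN_PORTS:
    print(f"UDP {port}: {port_counts[port]} packets")
```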

Perhaps usage of these ports is still down the road, waiting until it’s really ready for prime time?

Final Thoughts

There are so many unknowns to go through regarding the Teams infrastructure and the client.  Microsoft will definitely begin releasing this information over time now that announcements are public, and some of this information may be updated, solidified, or removed.  At a minimum, it’s an interesting dig into the product…all from a little network sniffing!

4 thoughts on “Examining Network Traffic for Microsoft Teams in Office365”

    1. Thanks, B-Ry! It is about as informative as possible with the limited info I have. We’ll see how it changes in the coming weeks… 😉

  1. Very well-done.

    Do you think Microsoft would ever support customers having their own CNAME records pointing to teams (or any other O365 app)? An attempt on my part returns this interesting message:

    Our services aren’t available right now. We’re working to restore all services as soon as possible. Please check back soon. Ref A:….. Ref B:…. Ref C:…..

    1. I suppose there are scenarios where CNAME records in customer DNS Zones could refer to records in Microsoft zones, but the use case is likely very, very small. Given restrictions like HSTS and Geo-DNS referrals and Traffic Manager operations, I honestly don’t expect Microsoft to ever allow customers to refer to a CNAME buried deep in their infrastructure. They will request you to use top-level public DNS records to enter a particular service and then they will optimize the traffic on your behalf, referring you where you need to go.

      Regarding Teams…no, I don’t believe you’ll ever be able to do what you are asking.
