diff --git a/articles/active-directory/active-directory-conditional-access-device-policies.md b/articles/active-directory/active-directory-conditional-access-device-policies.md index 06c19bcc5914f..d5ffe9c677aea 100644 --- a/articles/active-directory/active-directory-conditional-access-device-policies.md +++ b/articles/active-directory/active-directory-conditional-access-device-policies.md @@ -1,6 +1,6 @@ --- -title: Conditional access device policies for Office 365 services | Microsoft Docs -description: Details on how device-based conditions control access to Office 365 services. While Information Workers (IWs) want to access Office 365 services like Exchange and SharePoint Online at work or school from their personal devices, their IT admin wants the access to be secure.IT admins can provision conditional access device policies to secure corporate resources, while at the same time allowing IWs on compliant devices to access the services. +title: Azure Active Directory conditional access device policies for Office 365 services | Microsoft Docs +description: Learn about how to provision conditional access device policies to help make corporate resources more secure, while maintaining user compliance and access to services. services: active-directory documentationcenter: '' author: MarkusVi @@ -17,25 +17,28 @@ ms.date: 05/18/2017 ms.author: markvi --- -# Conditional access device policies for Office 365 services +# Active Directory conditional access device policies for Office 365 services -The term, “conditional access” has many conditions associated with it such as multi-factor authenticated user, authenticated device, compliant device etc. This topic primarily focusses on device-based conditions to control access to Office 365 services. While Information Workers (IWs) want to access Office 365 services like Exchange and SharePoint Online at work or school from their personal devices, their IT admin wants the access to be secure. 
IT admins can provision conditional access device policies to secure corporate resources, while at the same time allowing IWs on compliant devices to access the services. Conditional access policies to Office 365 may be configured from Microsoft Intune conditional access portal. +Conditional access requires multiple pieces to work. It involves a multi-factor authenticated user, an authenticated device, and a compliant device, among other factors. In this article, we primarily focus on device-based conditions that your organization can use to help you control access to Office 365 services. -Azure Active Directory enforces conditional access policies to secure access to Office 365 services. An administrator can create a conditional access policy that blocks a user on a non-compliant device from accessing an O365 service. The user must conform to company’s device policies before access can be granted to the service. Alternately, the admin can also create a policy that requires users to just enroll their devices to gain access to an O365 service. Policies may be applied to all users of an organization, or limited to a few target groups and enhanced over time to include additional target groups. +Corporate users want to access Office 365 services like Exchange and SharePoint Online at work or school from their personal devices. You want the access to be secure. You can provision conditional access device policies to help make corporate resources more secure, while granting access to services for users who are using compliant devices. You can set conditional access policies to Office 365 in the Microsoft Intune conditional access portal. -A prerequisite for enforcing device policies is for users to register their devices with Azure Active Directory Device Registration service. You can opt to enable Multi-factor authentication (MFA) for registering devices with Azure Active Directory Device Registration service. 
MFA is recommended for Azure Active Directory Device Registration service. When MFA is enabled, users registering their devices with Azure Active Directory Device Registration service are challenged for second factor authentication. +Azure Active Directory (Azure AD) enforces conditional access policies to help secure access to Office 365 services. You can create a conditional access policy that blocks a user who is using a noncompliant device from accessing an Office 365 service. The user must conform to the company’s device policies before access to the service is granted. Alternately, you can create a policy that requires users to enroll their devices to gain access to an Office 365 service. Policies can be applied to all users in an organization, or limited to a few target groups. You can add more target groups to a policy over time. -## How does conditional access policy work? -When a user requests access to O365 service from a supported device platform, Azure Active Directory authenticates the user and device from which the user launches the request; and grants access to the service only when the user conforms to the policy set for the service. Users that do not have their device enrolled are given remedial instructions on how to enroll and become compliant to access corporate O365 services. Users on iOS and Android devices will be required to enroll their devices using Company Portal application. When a user enrolls his/her device, the device is registered with Azure Active Directory, and enrolled for device management and compliance. Customers must use the Azure Active Directory Device Registration service in conjunction with Microsoft Intune to enable mobile device management for Office 365 service. Device enrollment is a pre-requisite for users to access Office 365 services when device policies are enforced. +A prerequisite for enforcing device policies is that users must register their devices with the Azure AD device registration service. 
You can opt to turn on multi-factor authentication for devices that register with the Azure AD device registration service. Multi-factor authentication is recommended for the Azure Active Directory device registration service. When multi-factor authentication is turned on, users who register their devices with the Azure AD device registration service are challenged for second-factor authentication. -When a user enrolls his/her device successfully, the device becomes trusted. Azure Active Directory provides Single-Sign-On to access company applications and enforces conditional access policy to grant access to a service not only the first time the user requests access, but every time the user requests to renew access. The user will be denied access to services when sign-in credentials are changed, device is lost/stolen, or the policy is not met at the time of request for renewal. +## How does a conditional access policy work? -## Deployment considerations: +When a user requests access to an Office 365 service from a supported device platform, Azure AD authenticates the user and the device. Azure AD grants access to the service only if the user conforms to the policy set for the service. Users on devices that are not enrolled are given instructions on how to enroll and become compliant to access corporate Office 365 services. Users on iOS and Android devices are required to enroll their devices by using the Intune Company Portal application. When a user enrolls a device, the device is registered with Azure AD and it's enrolled for device management and compliance. You must use the Azure AD device registration service with Microsoft Intune for mobile device management for Office 365 services. Device enrollment is required for users to access Office 365 services when device policies are enforced. -You must use Azure Active Directory device registration service to register devices. +When a user successfully enrolls a device, the device becomes trusted. 
Azure AD gives the authenticated user single sign-on access to company applications. Azure AD enforces a conditional access policy to grant access to a service not only the first time the user requests access, but every time the user renews a request for access. The user is denied access to services when sign-in credentials are changed, the device is lost or stolen, or the conditions of the policy are not met at the time of request for renewal. -When users are about to be authenticated on premises, Active Directory Federation Services (AD FS) (1.0 and above) is required. Multi-factor authentication (MFA) for Workplace Join fails when the identity provider is not capable of MFA. For example, AD FS 2.0 is not MFA capable. Your administrator must ensure that the on-premises AD FS is MFA capable and a valid MFA method is enabled, before enabling MFA on the Azure Active Directory device registration service. For example, AD FS on Windows Server 2012 R2 has MFA capabilities. You must also enable an additional valid authentication (MFA) method on the AD FS server before enabling MFA on the Azure Active Directory device registration service. For more information on supported MFA methods in AD FS, see Configure Additional Authentication Methods for AD FS. +## Deployment considerations + +You must use the Azure AD device registration service to register devices. + +When on-premises users are about to be authenticated, Active Directory Federation Services (AD FS) (version 1.0 and later versions) is required. Multi-factor authentication for Workplace Join fails when the identity provider is not capable of multi-factor authentication. For example, you can't use multi-factor authentication with AD FS 2.0. Ensure that the on-premises AD FS works with multi-factor authentication, and that a valid multi-factor authentication method is in place before you turn on multi-factor authentication for the Azure AD device registration service. 
For example, AD FS on Windows Server 2012 R2 has multi-factor authentication capabilities. You also must set an additional valid authentication (multi-factor authentication) method on the AD FS server before you turn on multi-factor authentication for the Azure AD device registration service. For more information about supported multi-factor authentication methods in AD FS, see [Configure additional authentication methods for AD FS](/windows-server/identity/ad-fs/operations/configure-additional-authentication-methods-for-ad-fs). ## Next steps -See the [Azure Active Directory Conditional Access FAQ](active-directory-conditional-faqs.md) for more answers to common questions. +* For answers to common questions, see [Azure Active Directory conditional access FAQs](active-directory-conditional-faqs.md). diff --git a/articles/active-directory/active-directory-conditional-faqs.md b/articles/active-directory/active-directory-conditional-faqs.md index 1c7597fcec8f0..a9900ba52b985 100644 --- a/articles/active-directory/active-directory-conditional-faqs.md +++ b/articles/active-directory/active-directory-conditional-faqs.md @@ -1,6 +1,6 @@ --- -title: Azure Active Directory Conditional Access FAQ | Microsoft Docs -description: 'Frequently asked questions about conditional access ' +title: Azure Active Directory conditional access FAQs | Microsoft Docs +description: Get answers to frequently asked questions about conditional access in Azure Active Directory. services: active-directory documentationcenter: '' author: MarkusVi @@ -16,51 +16,43 @@ ms.date: 05/25/2017 ms.author: markvi --- -# Azure Active Directory Conditional Access FAQ +# Azure Active Directory conditional access FAQs ## Which applications work with conditional access policies? -**A:** Please see [Applications and browsers that use conditional access rules in Azure Active Directory](active-directory-conditional-access-supported-apps.md). 
- ---+For information about applications that work with conditional access policies, see [Applications and browsers that use conditional access rules in Azure Active Directory](active-directory-conditional-access-supported-apps.md). ## Are conditional access policies enforced for B2B collaboration and guest users? -**A:** Policies are enforced for B2B collaboration users. However, in some cases, a user might not be able to satisfy the policy requirement if, for example, an organization does not support multi-factor authentication. -The policy is currently not enforced for SharePoint guest users. The guest relationship is maintained within SharePoint. Guest users accounts are not subject to access polices at the authentication server. Guest access can be managed at SharePoint. ---+Policies are enforced for business-to-business (B2B) collaboration users. However, in some cases, a user might not be able to satisfy the policy requirements. For example, a guest user's organization might not support multi-factor authentication. + +Currently, conditional access policies are not enforced for SharePoint guest users. The guest relationship is maintained in SharePoint. Guest user accounts in SharePoint are not subject to access policies at the authentication server. You can manage guest access in SharePoint. ## Does a SharePoint Online policy also apply to OneDrive for Business? -**A:** Yes. ---+Yes. A SharePoint Online policy also applies to OneDrive for Business. ## Why can’t I set a policy on client apps, like Word or Outlook? -**A:** A conditional access policy sets requirements for accessing a service and is enforced when authentication happens to that service. The policy is not set directly on a client application; instead, it is applied when it calls into a service. For example, a policy set on SharePoint applies to clients calling SharePoint and a policy set on Exchange applies to Outlook. 
---- +A conditional access policy sets requirements for accessing a service. It's enforced when authentication to that service occurs. The policy is not set directly on a client application. Instead, it is applied when a client calls a service. For example, a policy set on SharePoint applies to clients calling SharePoint. A policy set on Exchange applies to Outlook. ## Does a conditional access policy apply to service accounts? -**A:** Conditional access policies apply to all user accounts. This includes user accounts used as service accounts. In many cases, a service account that runs unattended is not able to satisfy a policy. This is, for example the case, when MFA is required. In these cases, services accounts can be excluded from a policy, using conditional access policy management settings. Learn more about applying a policy to users here. ---- +Conditional access policies apply to all user accounts. This includes user accounts that are used as service accounts. Often, a service account that runs unattended can't satisfy the requirements of a conditional access policy. For example, multi-factor authentication might be required. Service accounts can be excluded from a policy by using conditional access policy management settings. -## Are Graph APIs available to configure configure conditional access policies? -**A:** not yet. +## Are Graph APIs available for configuring conditional access policies? ---- +Currently, no. -## Q: What is the default exclusion policy for unsupported device platforms? +## What is the default exclusion policy for unsupported device platforms? -**A:** At the present time, conditional access policies are selectively enforced on users on iOS and Android devices. Applications on other device platforms are, by default, unaffected by the conditional access policy for iOS and Android devices. Tenant admin may, however, choose to override the global policy to disallow access to users on unsupported platforms. 
+Currently, conditional access policies are selectively enforced on users of iOS and Android devices. Applications on other device platforms are, by default, not affected by the conditional access policy for iOS and Android devices. A tenant admin can choose to override the global policy to disallow access to users on platforms that are not supported. ---
-## Q: How do conditional access policies work for Microsoft Teams? +## How do conditional access policies work for Microsoft Teams? -**A:** Microsoft Teams relies heavily on Exchange Online and SharePoint Online for core productivity scenarios such as meetings, calendars, and files. Conditional access policies set up for these cloud apps apply to Teams during the sign-in experience. +Microsoft Teams relies heavily on Exchange Online and SharePoint Online for core productivity scenarios, like meetings, calendars, and file sharing. Conditional access policies that are set for these cloud apps apply to Microsoft Teams when a user signs in. -Microsoft Teams is also supported separately as a Cloud App in Azure AD Conditional Access policies and CA policy set up for this cloud app will apply to Teams during the sign-in experience. -Microsoft Teams desktop clients for Windows and Mac support modern authentication, which brings sign-on based on the Azure Active Directory Authentication Library (ADAL) to Microsoft Office client applications across platforms. +Microsoft Teams is also supported separately as a cloud app in Azure Active Directory conditional access policies. Conditional access policies that are set for this cloud app apply to Microsoft Teams when a user signs in. ---\ No newline at end of file +Microsoft Teams desktop clients for Windows and Mac support modern authentication. Modern authentication brings sign-in based on the Azure Active Directory Authentication Library (ADAL) to Microsoft Office client applications across platforms. 
\ No newline at end of file diff --git a/articles/active-directory/active-directory-saas-jobbadmin-tutorial.md b/articles/active-directory/active-directory-saas-jobbadmin-tutorial.md new file mode 100644 index 0000000000000..62512624d1f52 --- /dev/null +++ b/articles/active-directory/active-directory-saas-jobbadmin-tutorial.md @@ -0,0 +1,230 @@ +--- +title: 'Tutorial: Azure Active Directory integration with Jobbadmin | Microsoft Docs' +description: Learn how to configure single sign-on between Azure Active Directory and Jobbadmin. +services: active-directory +documentationCenter: na +author: jeevansd +manager: femila + +ms.assetid: c5208b0d-66a3-49ed-9aad-70d21f54aee0 +ms.service: active-directory +ms.workload: identity +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: article +ms.date: 06/01/2017 +ms.author: jeedes + +--- +# Tutorial: Azure Active Directory integration with Jobbadmin + +In this tutorial, you learn how to integrate Jobbadmin with Azure Active Directory (Azure AD). + +Integrating Jobbadmin with Azure AD provides you with the following benefits: + +- You can control in Azure AD who has access to Jobbadmin +- You can enable your users to automatically get signed-on to Jobbadmin (Single Sign-On) with their Azure AD accounts +- You can manage your accounts in one central location - the Azure portal + +If you want to know more details about SaaS app integration with Azure AD, see [what is application access and single sign-on with Azure Active Directory](active-directory-appssoaccess-whatis.md). + +## Prerequisites + +To configure Azure AD integration with Jobbadmin, you need the following items: + +- An Azure AD subscription +- A Jobbadmin single sign-on enabled subscription + +> [!NOTE] +> To test the steps in this tutorial, we do not recommend using a production environment. + +To test the steps in this tutorial, you should follow these recommendations: + +- Do not use your production environment, unless it is necessary. 
- If you don't have an Azure AD trial environment, you can get a one-month trial [here](https://azure.microsoft.com/pricing/free-trial/). + +## Scenario description +In this tutorial, you test Azure AD single sign-on in a test environment. +The scenario outlined in this tutorial consists of two main building blocks: + +1. Adding Jobbadmin from the gallery +2. Configuring and testing Azure AD single sign-on + +## Adding Jobbadmin from the gallery +To configure the integration of Jobbadmin into Azure AD, you need to add Jobbadmin from the gallery to your list of managed SaaS apps. + +**To add Jobbadmin from the gallery, perform the following steps:** + +1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click the **Azure Active Directory** icon. + + ![Active Directory][1] + +2. Navigate to **Enterprise applications**. Then go to **All applications**. + + ![Applications][2] + +3. To add a new application, click the **New application** button at the top of the dialog. + + ![Applications][3] + +4. In the search box, type **Jobbadmin**. + + ![Creating an Azure AD test user](./media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_search.png) + +5. In the results panel, select **Jobbadmin**, and then click the **Add** button to add the application. + + ![Creating an Azure AD test user](./media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_addfromgallery.png) + +## Configuring and testing Azure AD single sign-on +In this section, you configure and test Azure AD single sign-on with Jobbadmin based on a test user called "Britta Simon." + +For single sign-on to work, Azure AD needs to know what the counterpart user in Jobbadmin is to a user in Azure AD. In other words, a link relationship between an Azure AD user and the related user in Jobbadmin needs to be established. + +In Jobbadmin, assign the value of the **user name** in Azure AD as the value of the **Username** to establish the link relationship. 
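The user name to **Username** link relationship described above can be sketched as follows. This is an illustrative sketch only: the helper function, the list-based user store, and the case-insensitive comparison are assumptions made for illustration, not Jobbadmin's actual matching logic.

```python
from typing import Optional


def find_linked_user(azure_ad_user_name: str, jobbadmin_usernames: list) -> Optional[str]:
    """Return the Jobbadmin Username linked to the given Azure AD user name.

    Hypothetical helper: a SAML service provider matches the user
    identifier sent by Azure AD against its own Username field.
    """
    for username in jobbadmin_usernames:
        # Compare case-insensitively; the exact matching rule that
        # Jobbadmin applies is an assumption here.
        if username.lower() == azure_ad_user_name.lower():
            return username
    return None
```

For example, if Britta Simon signs in with her Azure AD user name, single sign-on can succeed only when a Jobbadmin account carrying that same Username value exists.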
+ +To configure and test Azure AD single sign-on with Jobbadmin, you need to complete the following building blocks: + +1. **[Configuring Azure AD Single Sign-On](#configuring-azure-ad-single-sign-on)** - to enable your users to use this feature. +2. **[Creating an Azure AD test user](#creating-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon. +3. **[Creating a Jobbadmin test user](#creating-a-jobbadmin-test-user)** - to have a counterpart of Britta Simon in Jobbadmin that is linked to the Azure AD representation of the user. +4. **[Assigning the Azure AD test user](#assigning-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on. +5. **[Testing Single Sign-On](#testing-single-sign-on)** - to verify whether the configuration works. + +### Configuring Azure AD single sign-on + +In this section, you enable Azure AD single sign-on in the Azure portal and configure single sign-on in your Jobbadmin application. + +**To configure Azure AD single sign-on with Jobbadmin, perform the following steps:** + +1. In the Azure portal, on the **Jobbadmin** application integration page, click **Single sign-on**. + + ![Configure Single Sign-On][4] + +2. On the **Single sign-on** dialog, select **Mode** as **SAML-based Sign-on** to enable single sign-on. + + ![Configure Single Sign-On](./media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_samlbase.png) + +3. On the **Jobbadmin Domain and URLs** section, perform the following steps: + + ![Configure Single Sign-On](./media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_url.png) + + a. In the **Sign-on URL** textbox, type a URL using the following pattern: `https://.jobbnorge.no/auth/saml2/login.ashx` + + b. In the **Identifier** textbox, type a URL using the following pattern: `https://.jobbnorge.no` + + c. 
In the **Reply URL** textbox, type a URL using the following pattern: `https://.jobbnorge.no/auth/saml2/login.ashx` + + > [!NOTE] + > These values are not real. Update these values with the actual Sign-on URL, Identifier, and Reply URL. Contact the [Jobbadmin Client support team](https://www.jobbnorge.no/om-oss/kontakt-oss) to get these values. + + + +4. On the **SAML Signing Certificate** section, click **Metadata XML** and then save the metadata file on your computer. + + ![Configure Single Sign-On](./media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_certificate.png) + +5. Click the **Save** button. + + ![Configure Single Sign-On](./media/active-directory-saas-jobbadmin-tutorial/tutorial_general_400.png) + +6. To configure single sign-on on the **Jobbadmin** side, you need to send the downloaded **Metadata XML** to the [Jobbadmin support team](https://www.jobbnorge.no/om-oss/kontakt-oss). They use this file to configure the SAML SSO connection properly on both sides. + +> [!TIP] +> You can now read a concise version of these instructions inside the [Azure portal](https://portal.azure.com), while you are setting up the app! After adding this app from the **Active Directory > Enterprise Applications** section, simply click the **Single Sign-On** tab and access the embedded documentation through the **Configuration** section at the bottom. You can read more about the embedded documentation feature here: [Azure AD embedded documentation]( https://go.microsoft.com/fwlink/?linkid=845985) +> + +### Creating an Azure AD test user +The objective of this section is to create a test user in the Azure portal called Britta Simon. + +![Create Azure AD User][100] + +**To create a test user in Azure AD, perform the following steps:** + +1. In the **Azure portal**, on the left navigation pane, click the **Azure Active Directory** icon. + + ![Creating an Azure AD test user](./media/active-directory-saas-jobbadmin-tutorial/create_aaduser_01.png) + +2. 
To display the list of users, go to **Users and groups** and click **All users**. + + ![Creating an Azure AD test user](./media/active-directory-saas-jobbadmin-tutorial/create_aaduser_02.png) + +3. To open the **User** dialog, click **Add** at the top of the dialog. + + ![Creating an Azure AD test user](./media/active-directory-saas-jobbadmin-tutorial/create_aaduser_03.png) + +4. On the **User** dialog page, perform the following steps: + + ![Creating an Azure AD test user](./media/active-directory-saas-jobbadmin-tutorial/create_aaduser_04.png) + + a. In the **Name** textbox, type **BrittaSimon**. + + b. In the **User name** textbox, type the **email address** of BrittaSimon. + + c. Select **Show Password** and write down the value of the **Password**. + + d. Click **Create**. + +### Creating a Jobbadmin test user + +To enable Azure AD users to log in to Jobbadmin, they must be provisioned into Jobbadmin. + +Contact the [Jobbadmin support team](https://www.jobbnorge.no/om-oss/kontakt-oss) to get the users added on their side. + +### Assigning the Azure AD test user + +In this section, you enable Britta Simon to use Azure single sign-on by granting access to Jobbadmin. + +![Assign User][200] + +**To assign Britta Simon to Jobbadmin, perform the following steps:** + +1. In the Azure portal, open the applications view, and then navigate to the directory view, go to **Enterprise applications**, and then click **All applications**. + + ![Assign User][201] + +2. In the applications list, select **Jobbadmin**. + + ![Configure Single Sign-On](./media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_app.png) + +3. In the menu on the left, click **Users and groups**. + + ![Assign User][202] + +4. Click the **Add** button. Then select **Users and groups** in the **Add Assignment** dialog. + + ![Assign User][203] + +5. In the **Users and groups** dialog, select **Britta Simon** in the Users list. + +6. Click the **Select** button in the **Users and groups** dialog. + +7. 
Click the **Assign** button in the **Add Assignment** dialog. + +### Testing single sign-on + +In this section, you test your Azure AD single sign-on configuration using the Access Panel. + +When you click the Jobbadmin tile in the Access Panel, you should see the login page of the Jobbadmin application. +For more information about the Access Panel, see [Introduction to the Access Panel](active-directory-saas-access-panel-introduction.md). + +## Additional resources + +* [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](active-directory-saas-tutorial-list.md) +* [What is application access and single sign-on with Azure Active Directory?](active-directory-appssoaccess-whatis.md) + + + + + +[1]: ./media/active-directory-saas-jobbadmin-tutorial/tutorial_general_01.png +[2]: ./media/active-directory-saas-jobbadmin-tutorial/tutorial_general_02.png +[3]: ./media/active-directory-saas-jobbadmin-tutorial/tutorial_general_03.png +[4]: ./media/active-directory-saas-jobbadmin-tutorial/tutorial_general_04.png + +[100]: ./media/active-directory-saas-jobbadmin-tutorial/tutorial_general_100.png + +[200]: ./media/active-directory-saas-jobbadmin-tutorial/tutorial_general_200.png +[201]: ./media/active-directory-saas-jobbadmin-tutorial/tutorial_general_201.png +[202]: ./media/active-directory-saas-jobbadmin-tutorial/tutorial_general_202.png +[203]: ./media/active-directory-saas-jobbadmin-tutorial/tutorial_general_203.png + diff --git a/articles/active-directory/application-provisioning-config-problem-no-users-provisioned.md b/articles/active-directory/application-provisioning-config-problem-no-users-provisioned.md index 4b90d38e984c1..b3c2fadca2cf9 100644 --- a/articles/active-directory/application-provisioning-config-problem-no-users-provisioned.md +++ b/articles/active-directory/application-provisioning-config-problem-no-users-provisioned.md @@ -20,7 +20,7 @@ ms.author: asteen # No users are being provisioned to an Azure AD Gallery application -Once automatic 
provisioning has been configured for an application (including verifying that the app credentials provided to Azure AD to connect to the app are valid). Then users and/or groups are provisioned to the app is determined by the following things: +After automatic provisioning has been configured for an application (and after verifying that the app credentials provided to Azure AD to connect to the app are valid), whether users and/or groups are provisioned to the app is determined by the following things: - Which users and groups have been **assigned** to the application. For more information on assignment, see [Assign a user or group to an enterprise app in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-coreapps-assign-user-azure-portal). diff --git a/articles/active-directory/application-proxy-back-end-kerberos-constrained-delegation-how-to.md b/articles/active-directory/application-proxy-back-end-kerberos-constrained-delegation-how-to.md index 8874bd84d0651..f08939cc976de 100644 --- a/articles/active-directory/application-proxy-back-end-kerberos-constrained-delegation-how-to.md +++ b/articles/active-directory/application-proxy-back-end-kerberos-constrained-delegation-how-to.md @@ -43,7 +43,7 @@ For this reason, our advice is always to start by making sure you have met all t Particularly the section on configuring KCD on 2012R2, as this employs a fundamentally different approach to configuring KCD on previous versions of Windows, but also while being mindful of several other considerations: -- It is not uncommon for a domain member server to change open a secure channel dialog with a specific domain controller. Later change to another at any given time, so connector hosts should generally not be restricted to being able to communicate with only specific local site DCs. +- It is not uncommon for a domain member server to open a secure channel dialog with a specific domain controller. 
It might then move to another domain controller at any given time, so connector hosts should generally not be restricted to being able to communicate with only specific local site DCs. - Similar to the above point, cross domain scenarios rely on referrals that direct a connector host to DCs that may reside outside of the local network perimeter. In this scenario it is equally important to make sure you are also allowing traffic onwards to DCs that represent other respective domains, or else delegation fail. diff --git a/articles/active-directory/connect/active-directory-aadconnectsync-connector-genericsql.md b/articles/active-directory/connect/active-directory-aadconnectsync-connector-genericsql.md index 656a328701ef5..0b983183269aa 100644 --- a/articles/active-directory/connect/active-directory-aadconnectsync-connector-genericsql.md +++ b/articles/active-directory/connect/active-directory-aadconnectsync-connector-genericsql.md @@ -13,7 +13,7 @@ ms.workload: identity ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 05/11/2017 +ms.date: 06/02/2017 ms.author: billmath --- diff --git a/articles/active-directory/connect/media/active-directory-aadconnectsync-connector-genericsql/runstep8.png b/articles/active-directory/connect/media/active-directory-aadconnectsync-connector-genericsql/runstep8.png index b4593a6f4bdd1..ea2d4a6d569d9 100644 Binary files a/articles/active-directory/connect/media/active-directory-aadconnectsync-connector-genericsql/runstep8.png and b/articles/active-directory/connect/media/active-directory-aadconnectsync-connector-genericsql/runstep8.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/create_aaduser_01.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/create_aaduser_01.png new file mode 100644 index 0000000000000..b0855745d721c Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/create_aaduser_01.png differ diff --git 
a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/create_aaduser_02.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/create_aaduser_02.png new file mode 100644 index 0000000000000..454d3fe8fb829 Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/create_aaduser_02.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/create_aaduser_03.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/create_aaduser_03.png new file mode 100644 index 0000000000000..bfa3f1d0128a3 Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/create_aaduser_03.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/create_aaduser_04.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/create_aaduser_04.png new file mode 100644 index 0000000000000..2e4c07b753542 Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/create_aaduser_04.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_attribute_03.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_attribute_03.png new file mode 100644 index 0000000000000..83e23cd5f6946 Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_attribute_03.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_attribute_04.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_attribute_04.png new file mode 100644 index 0000000000000..5cfffcd5eb0fe Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_attribute_04.png differ diff --git 
a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_attribute_05.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_attribute_05.png new file mode 100644 index 0000000000000..d9e411bf89468 Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_attribute_05.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_01.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_01.png new file mode 100644 index 0000000000000..b0855745d721c Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_01.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_02.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_02.png new file mode 100644 index 0000000000000..754e9852c908f Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_02.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_03.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_03.png new file mode 100644 index 0000000000000..4e3df4d23b418 Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_03.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_04.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_04.png new file mode 100644 index 0000000000000..96781e5e9013c Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_04.png differ diff --git 
a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_100.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_100.png new file mode 100644 index 0000000000000..c303e20b07947 Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_100.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_200.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_200.png new file mode 100644 index 0000000000000..f99e5925ce74f Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_200.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_201.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_201.png new file mode 100644 index 0000000000000..81b0e70bf7fb8 Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_201.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_202.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_202.png new file mode 100644 index 0000000000000..42d212fc29f29 Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_202.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_203.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_203.png new file mode 100644 index 0000000000000..43a377c22a0c8 Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_203.png differ diff 
--git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_400.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_400.png new file mode 100644 index 0000000000000..5b9c617255184 Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_general_400.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_addfromgallery.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_addfromgallery.png new file mode 100644 index 0000000000000..218c619f8787f Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_addfromgallery.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_app.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_app.png new file mode 100644 index 0000000000000..8bf55ced16483 Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_app.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_certificate.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_certificate.png new file mode 100644 index 0000000000000..577c571b129e3 Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_certificate.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_configure.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_configure.png new file mode 100644 index 0000000000000..7ef85fbcc1c08 Binary files /dev/null and 
b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_configure.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_samlbase.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_samlbase.png new file mode 100644 index 0000000000000..622682159cd5a Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_samlbase.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_search.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_search.png new file mode 100644 index 0000000000000..8d7aba3e45d18 Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_search.png differ diff --git a/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_url.png b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_url.png new file mode 100644 index 0000000000000..d1fb1cee3b541 Binary files /dev/null and b/articles/active-directory/media/active-directory-saas-jobbadmin-tutorial/tutorial_jobbadmin_url.png differ diff --git a/articles/analysis-services/analysis-services-backup.md b/articles/analysis-services/analysis-services-backup.md index c77092c124899..1cb3b1a6f7ddb 100644 --- a/articles/analysis-services/analysis-services-backup.md +++ b/articles/analysis-services/analysis-services-backup.md @@ -13,7 +13,7 @@ ms.workload: data-management ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 04/17/2017 +ms.date: 06/01/2017 ms.author: owend --- @@ -27,7 +27,7 @@ Backing up tabular model databases in Azure Analysis Services is much the same a > > -Backups are saved with a .abf extension. 
For in-memory tabular models, both model data and metadata are stored. For Direct Query tabular models, only model metadata is stored. Backups can be compressed and encrypted, depending on the options you choose. +Backups are saved with an abf extension. For in-memory tabular models, both model data and metadata are stored. For DirectQuery tabular models, only model metadata is stored. Backups can be compressed and encrypted, depending on the options you choose. @@ -50,7 +50,7 @@ Before backing up, you need to configure storage settings for your server. ![Select container](./media/analysis-services-backup/aas-backup-container.png) -5. Save your backup settings. You must save your changes whenever you change storage settings, or enable or disable backup. +5. Save your backup settings. ![Save backup settings](./media/analysis-services-backup/aas-backup-save.png) @@ -62,7 +62,7 @@ Before backing up, you need to configure storage settings for your server. 2. In **Backup Database** > **Backup file**, click **Browse**. -3. In the **Save file as** dialog, verify the folder path, and then type a name for the backup file. By default, the file name is given a .abf extension. +3. In the **Save file as** dialog, verify the folder path, and then type a name for the backup file. 4. In the **Backup Database** dialog, select options. @@ -84,7 +84,7 @@ When restoring, your backup file must be in the storage account you've configure > [!NOTE] -> If you're restoring a tabular model database from an on-premises SQL Server Analysis Services server, you must first remove all of the domain users from the model's roles, and add them back to the roles as Azure Active Directory users. The roles will be the same. +> If you're restoring from an on-premises server, you must remove all the domain users from the model's roles and add them back to the roles as Azure Active Directory users. 
> > @@ -109,5 +109,5 @@ Use [Restore-ASDatabase](https://docs.microsoft.com/sql/analysis-services/powers ## Related information [Azure storage accounts](../storage/storage-create-storage-account.md) -[High availablility](analysis-services-bcdr.md) +[High availability](analysis-services-bcdr.md) [Manage Azure Analysis Services](analysis-services-manage.md) diff --git a/articles/analysis-services/analysis-services-bcdr.md b/articles/analysis-services/analysis-services-bcdr.md index b239cfae60e2a..70a6158ffbef4 100644 --- a/articles/analysis-services/analysis-services-bcdr.md +++ b/articles/analysis-services/analysis-services-bcdr.md @@ -13,7 +13,7 @@ ms.workload: data-management ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 05/02/2017 +ms.date: 06/01/2017 ms.author: owend --- @@ -27,7 +27,7 @@ While rare, an Azure data center can have an outage. When an outage occurs, it c * Deploy models to redundant servers in other regions. This method requires processing data on both your primary server and redundant servers in-parallel, assuring all servers are in-sync. -* Backup databases from your primary server and restore on redundant servers. For example, you can automate nightly backups to Azure storage, and restore to other redundant servers in other regions. +* Back up databases from your primary server and restore on redundant servers. For example, you can automate nightly backups to Azure storage, and restore to other redundant servers in other regions. In either case, if your primary server experiences an outage, you must change the connection strings in reporting clients to connect to the server in a different regional datacenter. This change should be considered a last resort and only if a catastrophic regional data center outage occurs. It's more likely the data center hosting your primary server would come back online before you could update connections on all clients.
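The backup and restore steps covered above can also be scripted. The following is a hedged PowerShell sketch, not taken from these articles: the server URL and database names are placeholders, and it assumes the Analysis Services PowerShell cmdlets (including the [Restore-ASDatabase] cmdlet linked above) are installed and you are signed in with an Azure AD account.

```powershell
# Placeholder Azure Analysis Services server URL; replace with your own.
$server = "asazure://westus.asazure.windows.net/myserver"

# Back up an in-memory tabular database. The .abf file lands in the storage
# container configured in the server's backup settings.
Backup-ASDatabase -Server $server -Name "AWInternetSales" `
    -BackupFile "AWInternetSales.abf" -AllowOverwrite -ApplyCompression

# Restore the backup, overwriting any existing database with the same name.
Restore-ASDatabase -Server $server -Name "AWInternetSales" `
    -RestoreFile "AWInternetSales.abf" -AllowOverwrite
```

As the restore note above says, if the backup came from an on-premises server, remove the domain users from the model's roles after restoring and add them back as Azure Active Directory users.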
diff --git a/articles/analysis-services/analysis-services-connect-excel.md b/articles/analysis-services/analysis-services-connect-excel.md index 5e61cf26016dc..f90c1d7a390f9 100644 --- a/articles/analysis-services/analysis-services-connect-excel.md +++ b/articles/analysis-services/analysis-services-connect-excel.md @@ -14,18 +14,18 @@ ms.devlang: NA ms.topic: article ms.tgt_pltfrm: NA ms.workload: na -ms.date: 05/09/2017 +ms.date: 06/01/2017 ms.author: owend --- # Connect with Excel -Once you've created a server in Azure, and deployed a tabular model to it, you're ready to connect and begin exploring data. +Once you've created a server in Azure, and deployed a tabular model to it, you're ready to connect and begin exploring data. ## Connect in Excel -Connecting to a server in Excel is supported by using Get Data in Excel 2016 or Power Query in earlier versions. Connecting by using the Import Table Wizard in Power Pivot is not supported. +Connecting to a server in Excel is supported by using Get Data in Excel 2016. Connecting by using the Import Table Wizard in Power Pivot is not supported. 
**To connect in Excel 2016** diff --git a/articles/analysis-services/analysis-services-connect-pbi.md b/articles/analysis-services/analysis-services-connect-pbi.md index c37cb34b2168f..2f6b53d82affc 100644 --- a/articles/analysis-services/analysis-services-connect-pbi.md +++ b/articles/analysis-services/analysis-services-connect-pbi.md @@ -14,7 +14,7 @@ ms.devlang: NA ms.topic: article ms.tgt_pltfrm: NA ms.workload: na -ms.date: 04/12/2017 +ms.date: 06/01/2017 ms.author: owend --- diff --git a/articles/analysis-services/analysis-services-connect.md b/articles/analysis-services/analysis-services-connect.md index d80d5d44bd3cb..457850e9076db 100644 --- a/articles/analysis-services/analysis-services-connect.md +++ b/articles/analysis-services/analysis-services-connect.md @@ -14,7 +14,7 @@ ms.devlang: NA ms.topic: article ms.tgt_pltfrm: NA ms.workload: na -ms.date: 04/17/2017 +ms.date: 06/01/2017 ms.author: owend --- diff --git a/articles/analysis-services/analysis-services-create-server.md b/articles/analysis-services/analysis-services-create-server.md index d334926f5b342..6e82ae17f15be 100644 --- a/articles/analysis-services/analysis-services-create-server.md +++ b/articles/analysis-services/analysis-services-create-server.md @@ -14,7 +14,7 @@ ms.devlang: NA ms.topic: article ms.tgt_pltfrm: NA ms.workload: na -ms.date: 05/16/2017 +ms.date: 06/01/2017 ms.author: owend --- diff --git a/articles/analysis-services/analysis-services-data-providers.md b/articles/analysis-services/analysis-services-data-providers.md index a4d42aa8a8d97..c850ba94cd183 100644 --- a/articles/analysis-services/analysis-services-data-providers.md +++ b/articles/analysis-services/analysis-services-data-providers.md @@ -14,7 +14,7 @@ ms.devlang: NA ms.topic: article ms.tgt_pltfrm: NA ms.workload: na -ms.date: 04/14/2016 +ms.date: 06/01/2017 ms.author: owend --- @@ -25,12 +25,12 @@ Client libraries are necessary for client applications and tools to connect to A Analysis Services utilizes three
client libraries. ADOMD.NET and Analysis Services Management Objects (AMO) are managed client libraries. The Analysis Services OLE DB provider (MSOLAP DLL) is a native client library. Typically, all three are installed at the same time. Azure Analysis Services requires the latest versions. -Microsoft client applications such as Power BI Desktop and Excel install all three client libraries. However, depending on the version of Excel, or whether or not newer versions of Excel and Power BI Desktop are updated monthly, the client libraries installed may not be updated to the latest versions required by Azure Analysis Service. The same applies to custom applications or other interfaces such as AsCmd, TOM, ADOMD.NET. These applications require manually installing the libraries. The client libraries for manual installation are included in SQL Server feature packs as distributable packages; however, these are tied to the SQL Server version and may not be the latest. +Microsoft client applications such as Power BI Desktop and Excel install all three client libraries. However, depending on the version or frequency of updates, client libraries may not be the latest versions required by Azure Analysis Services. The same applies to custom applications or other interfaces such as AsCmd, TOM, ADOMD.NET. These applications require manually installing the libraries. The client libraries for manual installation are included in SQL Server feature packs as distributable packages. However, these client libraries are tied to the SQL Server version and may not be the latest. Client libraries for client connections are different from data providers required to connect from an Azure Analysis Services server to a data source. To learn more about datasource connections, see [Datasource connections](analysis-services-datasource.md). ## Download the latest **preview** client libraries -Use the following client libraries to get the latest bug fixes and updates.
These are recommended when connecting to Azure Analysis Services or SQL Server 2017 Analysis Services. +Use the following client libraries to get the latest bug fixes and updates. [MSOLAP (amd64) Preview](http://download.microsoft.com/download/4/8/2/482E5799-9B8E-4724-8A4C-F301BAE788EE/14.0.500.170/amd64/SQL_AS_OLEDB.msi)
[MSOLAP (x86) Preview](http://download.microsoft.com/download/4/8/2/482E5799-9B8E-4724-8A4C-F301BAE788EE/14.0.500.170/x86/SQL_AS_OLEDB.msi)
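To illustrate how a custom application uses the managed client libraries described above, here is a hedged PowerShell sketch. The assembly name, server URL, and query are placeholders or assumptions, and it presumes the ADOMD.NET client library is already installed on the machine.

```powershell
# Load the ADOMD.NET managed client library (assumes it is installed and
# resolvable by this partial assembly name).
Add-Type -AssemblyName "Microsoft.AnalysisServices.AdomdClient"

# Connect to a placeholder Azure Analysis Services server and run a DAX query.
$conn = New-Object Microsoft.AnalysisServices.AdomdClient.AdomdConnection(
    "Data Source=asazure://westus.asazure.windows.net/myserver")
$conn.Open()

$cmd = $conn.CreateCommand()
$cmd.CommandText = 'EVALUATE ROW("Check", 1)'   # trivial DAX query
$reader = $cmd.ExecuteReader()
while ($reader.Read()) { $reader.GetValue(0) }  # emit the single cell
$reader.Close()
$conn.Close()
```

AMO follows the same pattern for management operations, while MSOLAP is used by tools that connect through OLE DB rather than managed code.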
diff --git a/articles/analysis-services/analysis-services-datasource.md b/articles/analysis-services/analysis-services-datasource.md index f59449571b3b9..37ab8c586e743 100644 --- a/articles/analysis-services/analysis-services-datasource.md +++ b/articles/analysis-services/analysis-services-datasource.md @@ -14,12 +14,12 @@ ms.devlang: NA ms.topic: article ms.tgt_pltfrm: NA ms.workload: na -ms.date: 05/26/2017 +ms.date: 06/01/2017 ms.author: owend --- # Data sources supported in Azure Analysis Services -Azure Analysis Services servers support connecting to data sources in the cloud and on-premises in your organization. +Azure Analysis Services servers support connecting to data sources in the cloud and on-premises in your organization. Additional supported data sources are being added all the time. Check back often. The following data sources are currently supported: @@ -45,14 +45,11 @@ The following data sources are currently supported: > [!IMPORTANT] > Connecting to on-premises data sources requires an [On-premises data gateway](analysis-services-gateway.md) installed on a computer in your environment. -> [!NOTE] -> Additional supported data sources are being added all the time. Check back often. - -## Datasource providers +## Data providers Data models in Azure Analysis Services may require different data providers when connecting to certain data sources. In some cases, tabular models connecting to data sources using native providers such as SQL Server Native Client (SQLNCLI11) may return an error. -For in-memory or DirectQuery data models that connect to a cloud data source such as Azure SQL Database, if you use native providers other than SQLOLEDB, you may see error message: **“The provider 'SQLNCLI11.1' is not registered”**. Or, if you have a DirectQuery model connecting to on-premises data sources, if you use native providers you may see error message: **“Error creating OLE DB row set. Incorrect syntax near 'LIMIT'”**.
+For data models that connect to a cloud data source such as Azure SQL Database, if you use native providers other than SQLOLEDB, you may see the error message: **“The provider 'SQLNCLI11.1' is not registered.”** Or, if you have a DirectQuery model connecting to on-premises data sources, if you use native providers, you may see the error message: **“Error creating OLE DB row set. Incorrect syntax near 'LIMIT'”**. The following datasource providers are supported for in-memory or DirectQuery data models when connecting to data sources in the cloud or on-premises: diff --git a/articles/analysis-services/analysis-services-deploy.md b/articles/analysis-services/analysis-services-deploy.md index 584b540d59e8f..18c56153c7d92 100644 --- a/articles/analysis-services/analysis-services-deploy.md +++ b/articles/analysis-services/analysis-services-deploy.md @@ -14,7 +14,7 @@ ms.devlang: NA ms.topic: get-started-article ms.tgt_pltfrm: NA ms.workload: na -ms.date: 04/17/2017 +ms.date: 06/01/2017 ms.author: owend --- @@ -24,12 +24,12 @@ Once you've created a server in your Azure subscription, you're ready to deploy ## Before you begin To get started, you need: -* **Analysis Services server** in Azure. To learn more, see [Create an Analysis Services in Azure](analysis-services-create-server.md). -* **Tabular model project** in SSDT or an existing tabular model at the 1200 or later compatibility level on an Analysis Services instance. Never created one? Try the [Adventure Works Tutorial](https://msdn.microsoft.com/library/hh231691.aspx). +* **Analysis Services server** in Azure. To learn more, see [Create an Azure Analysis Services server](analysis-services-create-server.md). +* **Tabular model project** in SSDT or an existing tabular model at the 1200 or later compatibility level. Never created one? Try the [Adventure Works Tutorial](https://msdn.microsoft.com/library/hh231691.aspx).
* **On-premises gateway** - If one or more data sources are on-premises in your organization's network, you need to install an [On-premises data gateway](analysis-services-gateway.md). The gateway is necessary for your server in the cloud to connect to your on-premises data sources to process and refresh data in the model. > [!TIP] -> Before you deploy, make sure you can process the data in your tables. In SSDT, click **Model** > **Process** > **Process All**. If processing fails, deploying will to. +> Before you deploy, make sure you can process the data in your tables. In SSDT, click **Model** > **Process** > **Process All**. If processing fails, you cannot successfully deploy. > > diff --git a/articles/analysis-services/tutorials/aas-adventure-works-tutorial.md b/articles/analysis-services/tutorials/aas-adventure-works-tutorial.md index bd597e1e1b495..78ebe8cd36c26 100644 --- a/articles/analysis-services/tutorials/aas-adventure-works-tutorial.md +++ b/articles/analysis-services/tutorials/aas-adventure-works-tutorial.md @@ -11,10 +11,10 @@ tags: '' ms.assetid: ms.service: analysis-services ms.devlang: NA -ms.topic: article +ms.topic: get-started-article ms.tgt_pltfrm: NA ms.workload: na -ms.date: 05/26/2017 +ms.date: 06/01/2017 ms.author: owend --- # Azure Analysis Services - Adventure Works tutorial @@ -23,9 +23,9 @@ ms.author: owend This tutorial provides lessons on how to create and deploy a tabular model at the 1400 compatibility level by using [SQL Server Data Tools (SSDT)](https://docs.microsoft.com/sql/ssdt/download-sql-server-data-tools-ssdt). -If you're new to Analysis Services and tabular modeling, completing this tutorial is the quickest way to learn how to create a basic tabular model and deploy it to a real Analysis Services server. Once you have all of the prerequisites in-place, it should take about two or three hours to complete.
+If you're new to Analysis Services and tabular modeling, completing this tutorial is the quickest way to learn how to create and deploy a basic tabular model. Once you have the prerequisites in-place, it should take two to three hours to complete. -## What you'll learn +## What you learn - How to create a new tabular model project at the **1400 compatibility level** in SSDT. @@ -44,7 +44,7 @@ If you're new to Analysis Services and tabular modeling, completing this tutoria - How to deploy a tabular model to an **Azure Analysis Services** server or an on-premises SQL Server 2017 Analysis Services server. ## Prerequisites -In order to complete this tutorial, you need the following: +To complete this tutorial, you need: - An Azure Analysis Services or SQL Server 2017 Analysis Services instance to deploy your model to. Sign up for a free [Azure Analysis Services trial](https://azure.microsoft.com/services/analysis-services/) and [create a server](../analysis-services-create-server.md). Or, sign up and download [SQL Server 2017 Community Technology Preview](https://www.microsoft.com/evalcenter/evaluate-sql-server-vnext-ctp). @@ -59,13 +59,13 @@ In order to complete this tutorial, you need the following: - A client application such as [Power BI Desktop](https://powerbi.microsoft.com/desktop/) or Excel. ## Scenario -This tutorial is based on Adventure Works Cycles, a fictitious company. Adventure Works is a large, multinational manufacturing company that produces and distributes metal and composite bicycles to commercial markets in North America, Europe, and Asia. With headquarters in Bothell, Washington, the company employs 500 workers. Additionally, Adventure Works employs several regional sales teams throughout its market base. You are tasked with creating a tabular model for sales and marketing users to analyze Internet sales data in the AdventureWorksDW sample database. +This tutorial is based on Adventure Works Cycles, a fictitious company.
Adventure Works is a large, multinational manufacturing company that produces and distributes metal and composite bicycles to commercial markets in North America, Europe, and Asia. The company employs 500 workers. Additionally, Adventure Works employs several regional sales teams throughout its market base. Your project is to create a tabular model for sales and marketing users to analyze Internet sales data in the AdventureWorksDW database. -To complete the tutorial, you must complete a number of lessons. Within each lesson are a number of tasks; completing each task in order is necessary for completing the lesson. While in a particular lesson there may be several tasks that accomplish a similar outcome, but how you complete each task is slightly different. This is to show that there is often more than one way to complete a particular task, and to challenge you by using skills you've learned in previous lessons and tasks. +To complete the tutorial, you must complete various lessons. In each lesson, there are tasks. Completing each task in order is necessary for completing the lesson. In a particular lesson, there may be several tasks that accomplish a similar outcome, but how you complete each task is slightly different. This method shows there is often more than one way to complete a task, and challenges you to use skills you've learned in previous lessons and tasks. -The purpose of the lessons is to guide you through authoring a basic tabular model running by using many of the features included in SSDT. Because each lesson builds upon the previous lesson, you should complete the lessons in order. Once you've completed all of the lessons, you will have authored and deployed the Adventure Works Internet Sales sample tabular model on an Analysis Services server. +The purpose of the lessons is to guide you through authoring a basic tabular model by using many of the features included in SSDT.
Because each lesson builds upon the previous lesson, you should complete the lessons in order. -This tutorial does not provide lessons or information about managing an Azure Analysis Services server in Azure portal, managing a server or deployed database by using SQL Server Management Studio (SSMS), or using a reporting client application to connect to a deployed model to browse model data. +This tutorial does not provide lessons about managing a server in Azure portal, managing a server or database by using SSMS, or using a client application to browse model data. ## Lessons diff --git a/articles/analysis-services/tutorials/aas-lesson-1-create-a-new-tabular-model-project.md b/articles/analysis-services/tutorials/aas-lesson-1-create-a-new-tabular-model-project.md index 86bf9477fc7e1..08dffde0f0d60 100644 --- a/articles/analysis-services/tutorials/aas-lesson-1-create-a-new-tabular-model-project.md +++ b/articles/analysis-services/tutorials/aas-lesson-1-create-a-new-tabular-model-project.md @@ -11,22 +11,22 @@ tags: '' ms.assetid: ms.service: analysis-services ms.devlang: NA -ms.topic: article +ms.topic: get-started-article ms.tgt_pltfrm: NA ms.workload: na -ms.date: 05/26/2017 +ms.date: 06/01/2017 ms.author: owend --- # Lesson 1: Create a new tabular model project [!INCLUDE[analysis-services-appliesto-aas-sql2017-later](../../../includes/analysis-services-appliesto-aas-sql2017-later.md)] -In this lesson, you will use SQL Server Data Tool (SSDT) to create a new tabular model project at the 1400 compatibility level. Once your new project is created, you can begin adding data and authoring your model. This lesson also gives you a brief introduction to the tabular model authoring environment in SSDT. +In this lesson, you use SQL Server Data Tools (SSDT) to create a new tabular model project at the 1400 compatibility level. Once your new project is created, you can begin adding data and authoring your model. 
This lesson also gives you a brief introduction to the tabular model authoring environment in SSDT. Estimated time to complete this lesson: **10 minutes** ## Prerequisites -This topic is the first lesson in a tabular model authoring tutorial. To complete this lesson, there are a number of prerequisites you need to have in-place. To learn more, see [Azure Analysis Services - Adventure Works tutorial](../tutorials/aas-adventure-works-tutorial.md). +This topic is the first lesson in a tabular model authoring tutorial. To complete this lesson, there are several prerequisites you need to have in-place. To learn more, see [Azure Analysis Services - Adventure Works tutorial](../tutorials/aas-adventure-works-tutorial.md). ## Create a new tabular model project @@ -38,13 +38,13 @@ This topic is the first lesson in a tabular model authoring tutorial. To complet 3. In **Name**, type **AW Internet Sales**, and then specify a location for the project files. - By default, **Solution Name** will be the same as the project name; however, you can type a different solution name. + By default, **Solution Name** is the same as the project name; however, you can type a different solution name. 4. Click **OK**. 5. In the **Tabular model designer** dialog box, select **Integrated workspace**. - The workspace will host a tabular model database with the same name as the project during model authoring. Integrated workspace means SSDT will use a built-in instance, eliminating the need to install a separate Analysis Services server instance just for model authoring. + The workspace hosts a tabular model database with the same name as the project during model authoring. Integrated workspace means SSDT uses a built-in instance, eliminating the need to install a separate Analysis Services server instance just for model authoring. 6. In **Compatibility level**, select **SQL Server 2017 / Azure Analysis Services (1400)**. 
@@ -56,25 +56,25 @@ This topic is the first lesson in a tabular model authoring tutorial. To complet ## Understanding the SSDT tabular model authoring environment Now that you’ve created a new tabular model project, let’s take a moment to explore the tabular model authoring environment in SSDT. -After your project is created, it opens in SSDT. On the right side, in **Tabular Model Explorer**, you'll see a tree view of the objects in your model. Since you haven't yet imported data, the folders will be empty. You can right-click an object folder to perform actions, similar to the menu bar. As you step through this tutorial, you'll use the Tabular Model Explorer to navigate different objects in your model project. +After your project is created, it opens in SSDT. On the right side, in **Tabular Model Explorer**, you see a tree view of the objects in your model. Since you haven't yet imported data, the folders are empty. You can right-click an object folder to perform actions, similar to the menu bar. As you step through this tutorial, you use the Tabular Model Explorer to navigate different objects in your model project. ![aas-lesson1-tme](../tutorials/media/aas-lesson1-tme.png) -Click the **Solution Explorer** tab. Here, you'll see your **Model.bim** file. If you don’t see the designer window to the left (the empty window with the Model.bim tab), in **Solution Explorer**, under **AW Internet Sales Project**, double-click the **Model.bim** file. The Model.bim file contains all of the metadata for your model project. +Click the **Solution Explorer** tab. Here, you see your **Model.bim** file. If you don’t see the designer window to the left (the empty window with the Model.bim tab), in **Solution Explorer**, under **AW Internet Sales Project**, double-click the **Model.bim** file. The Model.bim file contains the metadata for your model project. ![aas-lesson1-se](../tutorials/media/aas-lesson1-se.png) -Click **Model.bim**. 
In the **Properties** window, you'll see the model properties, most important of which is the **DirectQuery Mode** property. This property specifies whether or not the model is deployed in In-Memory mode (Off) or DirectQuery mode (On). For this tutorial, you will author and deploy your model in In-Memory mode. +Click **Model.bim**. In the **Properties** window, you see the model properties, most important of which is the **DirectQuery Mode** property. This property specifies whether the model is deployed in In-Memory mode (Off) or DirectQuery mode (On). For this tutorial, you author and deploy your model in In-Memory mode. ![aas-lesson1-properties](../tutorials/media/aas-lesson1-properties.png) -When you create a new model, certain model properties are set automatically according to the Data Modeling settings that can be specified in the **Tools** menu > **Options** dialog box. Data Backup, Workspace Retention, and Workspace Server properties specify how and where the workspace database (your model authoring database) is backed up, retained in-memory, and built. You can change these settings later if necessary, but for now, just leave these properties as they are. +When you create a model project, certain model properties are set automatically according to the Data Modeling settings that can be specified in the **Tools** menu > **Options** dialog box. Data Backup, Workspace Retention, and Workspace Server properties specify how and where the workspace database (your model authoring database) is backed up, retained in-memory, and built. You can change these settings later if necessary, but for now, leave these properties as they are. -In **Solution Explorer**, right-click **AW Internet Sales** (project), and then click **Properties**. The **AW Internet Sales Property Pages** dialog box appears. These are the advanced project properties. You will set some of these properties later when you deploy your model.
+In **Solution Explorer**, right-click **AW Internet Sales** (project), and then click **Properties**. The **AW Internet Sales Property Pages** dialog box appears. You set some of these properties later when you deploy your model. -When you installed SSDT, several new menu items were added to the Visual Studio environment. Let’s look at those specific to authoring tabular models. Click on the **Model** menu. From here, you can import data, refresh workspace data, browse your model in Excel with the Analyze in Excel feature, create perspectives and roles, select the model view, and set calculation options. Click on the **Table** menu. From here you can create and manage relationships between tables, specify date table settings, create partitions, and edit table properties. If you click on the **Column** menu, you can add and delete columns in a table, freeze columns, and specify sort order. SSDT also adds some buttons to the bar. Most useful is the AutoSum feature to create a standard aggregation measure for a selected column. Other toolbar buttons provide quick access to frequently used features and commands. +When you installed SSDT, several new menu items were added to the Visual Studio environment. Click the **Model** menu. From here, you can import data, refresh workspace data, browse your model in Excel, create perspectives and roles, select the model view, and set calculation options. Click the **Table** menu. From here, you can create and manage relationships, specify date table settings, create partitions, and edit table properties. If you click the **Column** menu, you can add and delete columns in a table, freeze columns, and specify sort order. SSDT also adds some buttons to the bar. Most useful is the AutoSum feature to create a standard aggregation measure for a selected column. Other toolbar buttons provide quick access to frequently used features and commands. 
-Explore some of the dialogs and locations for various features specific to authoring tabular models. While some items will not yet be active, you can get a good idea of the tabular model authoring environment. +Explore some of the dialogs and locations for various features specific to authoring tabular models. While some items are not yet active, you can get a good idea of the tabular model authoring environment. ## What's next? diff --git a/articles/analysis-services/tutorials/aas-lesson-10-create-partitions.md b/articles/analysis-services/tutorials/aas-lesson-10-create-partitions.md index e47aa82b23a24..546edcfa150ce 100644 --- a/articles/analysis-services/tutorials/aas-lesson-10-create-partitions.md +++ b/articles/analysis-services/tutorials/aas-lesson-10-create-partitions.md @@ -11,7 +11,7 @@ tags: '' ms.assetid: ms.service: analysis-services ms.devlang: NA -ms.topic: article +ms.topic: get-started-article ms.tgt_pltfrm: NA ms.workload: na ms.date: 05/26/2017 @@ -21,7 +21,7 @@ ms.author: owend [!INCLUDE[analysis-services-appliesto-aas-sql2017-later](../../../includes/analysis-services-appliesto-aas-sql2017-later.md)] -In this lesson, you create partitions to divide the FactInternetSales table into smaller logical parts that can be processed (refreshed) independent of other partitions. By default, every table you include in your model has one partition, which includes all of the table’s columns and rows. For the FactInternetSales table, we want to divide the data by year; one partition for each of the table’s five years. Each partition can then be processed independently. To learn more, see [Partitions](https://docs.microsoft.com/sql/analysis-services/tabular-models/partitions-ssas-tabular). +In this lesson, you create partitions to divide the FactInternetSales table into smaller logical parts that can be processed (refreshed) independent of other partitions. 
By default, every table you include in your model has one partition, which includes all the table’s columns and rows. For the FactInternetSales table, we want to divide the data by year; one partition for each of the table’s five years. Each partition can then be processed independently. To learn more, see [Partitions](https://docs.microsoft.com/sql/analysis-services/tabular-models/partitions-ssas-tabular). Estimated time to complete this lesson: **15 minutes** @@ -48,9 +48,9 @@ This topic is part of a tabular modeling tutorial, which should be completed in ![aas-lesson10-filter-rows](../tutorials/media/aas-lesson10-filter-rows.png) - Notice in Query Editor, in APPLIED STEPS, you see another step named Filtered Rows; this is the filter you applied to select only order dates from 2010. + Notice in Query Editor, in APPLIED STEPS, you see another step named Filtered Rows. This filter selects only order dates from 2010. -8. Click **Import** to run the query. +8. Click **Import**. In Partition Manager, notice the query expression now has an additional Filtered Rows clause. @@ -108,7 +108,7 @@ In Partition Manager, notice the **Last Processed** column for each of the new p If you're prompted for Impersonation credentials, enter the Windows user name and password you specified in Lesson 2. - The **Data Processing** dialog box appears and displays process details for each partition. Notice that a different number of rows for each partition are transferred. This is because each partition includes only those rows for the year specified in the WHERE clause in the SQL Statement. When processing is finished, go ahead and close the Data Processing dialog box. + The **Data Processing** dialog box appears and displays process details for each partition. Notice that a different number of rows for each partition are transferred. Each partition includes only those rows for the year specified in the WHERE clause in the SQL Statement.
When processing is finished, go ahead and close the Data Processing dialog box. ![aas-lesson10-process-complete](../tutorials/media/aas-lesson10-process-complete.png) diff --git a/articles/analysis-services/tutorials/aas-lesson-11-create-roles.md b/articles/analysis-services/tutorials/aas-lesson-11-create-roles.md index 13402fb4db824..3300ce5b57963 100644 --- a/articles/analysis-services/tutorials/aas-lesson-11-create-roles.md +++ b/articles/analysis-services/tutorials/aas-lesson-11-create-roles.md @@ -11,7 +11,7 @@ tags: '' ms.assetid: ms.service: analysis-services ms.devlang: NA -ms.topic: article +ms.topic: get-started-article ms.tgt_pltfrm: NA ms.workload: na ms.date: 05/26/2017 @@ -21,20 +21,20 @@ ms.author: owend [!INCLUDE[analysis-services-appliesto-aas-sql2017-later](../../../includes/analysis-services-appliesto-aas-sql2017-later.md)] -In this lesson, you create roles. Roles provide model database object and data security by limiting access to only thoseSa users which are role members. Each role is defined with a single permission: None, Read, Read and Process, Process, or Administrator. Roles can be defined during model authoring by using Role Manager. After a model has been deployed, you can manage roles by using SQL Server Management Studio (SSMS). To learn more, see [Roles](https://docs.microsoft.com/sql/analysis-services/tabular-models/roles-ssas-tabular). +In this lesson, you create roles. Roles provide model database object and data security by limiting access to only those users that are role members. Each role is defined with a single permission: None, Read, Read and Process, Process, or Administrator. Roles can be defined during model authoring by using Role Manager. After a model has been deployed, you can manage roles by using SQL Server Management Studio (SSMS). To learn more, see [Roles](https://docs.microsoft.com/sql/analysis-services/tabular-models/roles-ssas-tabular). > [!NOTE] -> Creating roles is not necessary to complete this tutorial. 
By default, the account you are currently logged in with will have Administrator privileges on the model. However, to allow other users in your organization to browse the model by using a reporting client, you must create at least one role with Read permissions and add those users as members. +> Creating roles is not necessary to complete this tutorial. By default, the account you are currently logged in with has Administrator privileges on the model. However, for other users in your organization to browse by using a reporting client, you must create at least one role with Read permissions and add those users as members. -You will create three roles: +You create three roles: - **Sales Manager** – This role can include users in your organization for which you want to have Read permission to all model objects and data. -- **Sales Analyst US** – This role can include users in your organization for which you want only to be able to browse data related to sales in the United States. For this role, you will use a DAX formula to define a *Row Filter*, which restricts members to browse data only for the United States. +- **Sales Analyst US** – This role can include users in your organization for which you want only to be able to browse data related to sales in the United States. For this role, you use a DAX formula to define a *Row Filter*, which restricts members to browse data only for the United States. - **Administrator** – This role can include users for which you want to have Administrator permission, which allows unlimited access and permissions to perform administrative tasks on the model database. -Because Windows user and group accounts in your organization are unique, you can add accounts from your particular organization to members. However, for this tutorial, you can also leave the members blank. You will still be able to test the effect of each role later in Lesson 12: Analyze in Excel. 
+Because Windows user and group accounts in your organization are unique, you can add accounts from your particular organization to members. However, for this tutorial, you can also leave the members blank. You test the effect of each role later in Lesson 12: Analyze in Excel. Estimated time to complete this lesson: **15 minutes** @@ -49,7 +49,7 @@ This topic is part of a tabular modeling tutorial, which should be completed in 2. In Role Manager, click **New**. -3. Click on the new role, and then in the **Name** column, rename the role to **Sales Manager**. +3. Click the new role, and then in the **Name** column, rename the role to **Sales Manager**. 4. In the **Permissions** column, click the dropdown list, and then select the **Read** permission. @@ -65,16 +65,16 @@ This topic is part of a tabular modeling tutorial, which should be completed in 3. Give this role **Read** permission. -4. Click on the Row Filters tab, and then for the **DimGeography** table only, in the DAX Filter column, type the following formula: +4. Click the Row Filters tab, and then for the **DimGeography** table only, in the DAX Filter column, type the following formula: ```Administrator =DimGeography[CountryRegionCode] = "US" ``` - A Row Filter formula must resolve to a Boolean (TRUE/FALSE) value. With this formula, you are specifying that only rows with the Country Region Code value of “US” be visible to the user. + A Row Filter formula must resolve to a Boolean (TRUE/FALSE) value. With this formula, you are specifying that only rows with the Country Region Code value of “US” are visible to the user. ![aas-lesson11-role-filter](../tutorials/media/aas-lesson11-role-filter.png) -6. Optional: Click on the **Members** tab, and then click **Add**. In the **Select Users or Groups** dialog box, enter the Windows users or groups from your organization you want to include in the role. +6. Optional: Click the **Members** tab, and then click **Add**. 
In the **Select Users or Groups** dialog box, enter the Windows users or groups from your organization you want to include in the role. #### To create an Administrator user role @@ -84,7 +84,7 @@ This topic is part of a tabular modeling tutorial, which should be completed in 3. Give this role **Administrator** permission. -4. Optional: Click on the **Members** tab, and then click **Add**. In the **Select Users or Groups** dialog box, enter the Windows users or groups from your organization you want to include in the role. +4. Optional: Click the **Members** tab, and then click **Add**. In the **Select Users or Groups** dialog box, enter the Windows users or groups from your organization you want to include in the role. ## What's next? diff --git a/articles/analysis-services/tutorials/aas-lesson-12-analyze-in-excel.md b/articles/analysis-services/tutorials/aas-lesson-12-analyze-in-excel.md index 0f6b499b1bf2c..d5acc348cb45f 100644 --- a/articles/analysis-services/tutorials/aas-lesson-12-analyze-in-excel.md +++ b/articles/analysis-services/tutorials/aas-lesson-12-analyze-in-excel.md @@ -11,7 +11,7 @@ tags: '' ms.assetid: ms.service: analysis-services ms.devlang: NA -ms.topic: article +ms.topic: get-started-article ms.tgt_pltfrm: NA ms.workload: na ms.date: 05/26/2017 @@ -21,11 +21,11 @@ ms.author: owend [!INCLUDE[analysis-services-appliesto-aas-sql2017-later](../../../includes/analysis-services-appliesto-aas-sql2017-later.md)] -In this lesson, you use the Analyze in Excel feature in SSDT to open Microsoft Excel, automatically create a data source connection to the model workspace, and automatically add a PivotTable to the worksheet. The Analyze in Excel feature is meant to provide a quick and easy way to test the efficacy of your model design prior to deploying your model. You will not perform any data analysis in this lesson. The purpose of this lesson is to familiarize you, the model author, with the tools you can use to test your model design. 
Unlike using the Analyze in Excel feature, which is meant for model authors, end-users will use client reporting applications like Excel or Power BI to connect to and browse deployed model data. +In this lesson, you use the Analyze in Excel feature to open Microsoft Excel, automatically create a connection to the model workspace, and automatically add a PivotTable to the worksheet. The Analyze in Excel feature is meant to provide a quick and easy way to test the efficacy of your model design prior to deploying your model. You do not perform any data analysis in this lesson. The purpose of this lesson is to familiarize you, the model author, with the tools you can use to test your model design. -In order to complete this lesson, Excel must be installed on the same computer as SSDT. +To complete this lesson, Excel must be installed on the same computer as SSDT. -Estimated time to complete this lesson: **5 minutes** +Estimated time to complete this lesson: **Five minutes** ## Prerequisites This topic is part of a tabular modeling tutorial, which should be completed in order. Before performing the tasks in this lesson, you should have completed the previous lesson: [Lesson 11: Create roles](../tutorials/aas-lesson-11-create-roles.md). @@ -39,9 +39,9 @@ In these first tasks, you browse your model by using both the default perspectiv 2. In the **Analyze in Excel** dialog box, click **OK**. - Excel will open with a new workbook. A data source connection is created using the current user account and the Default perspective is used to define viewable fields. A PivotTable is automatically added to the worksheet. + Excel opens with a new workbook. A data source connection is created using the current user account and the Default perspective is used to define viewable fields. A PivotTable is automatically added to the worksheet. -3. 
In Excel, in the **PivotTable Field List**, notice the **DimDate** and **FactInternetSales** measure groups appear, as well as the **DimCustomer**, **DimDate**, **DimGeography**, **DimProduct**, **DimProductCategory**, **DimProductSubcategory**, and **FactInternetSales** tables with all of their respective columns appear. +3. In Excel, in the **PivotTable Field List**, notice the **DimDate** and **FactInternetSales** measure groups appear. The **DimCustomer**, **DimDate**, **DimGeography**, **DimProduct**, **DimProductCategory**, **DimProductSubcategory**, and **FactInternetSales** tables with their respective columns also appear. 4. Close Excel without saving the workbook. @@ -60,15 +60,15 @@ In these first tasks, you browse your model by using both the default perspectiv 4. Close Excel without saving the workbook. ## Browse by using roles -Roles are an integral part of any tabular model. Without at least one role to which users are added as members, users will not be able to access and analyze data using your model. The Analyze in Excel feature provides a way for you to test the roles you have defined. +Roles are an important part of any tabular model. Without at least one role to which users are added as members, users cannot access and analyze data using your model. The Analyze in Excel feature provides a way for you to test the roles you have defined. #### To browse by using the Sales Manager user role 1. In SSDT, click the **Model** menu, and then click **Analyze in Excel**. -2. In the **Analyze in Excel** dialog box, in **Specify the user name or role to use to connect to the model**, select **Role**, and then in the drop-down listbox, select **Sales Manager**, and then click **OK**. +2. In **Specify the user name or role to use to connect to the model**, select **Role**, and then in the drop-down listbox, select **Sales Manager**, and then click **OK**. - Excel will open with a new workbook. A PivotTable is automatically created. 
The Pivot Table Field List includes all of the data fields available in your new model. + Excel opens with a new workbook. A PivotTable is automatically created. The PivotTable Field List includes all the data fields available in your new model. 3. Close Excel without saving the workbook. diff --git a/articles/analysis-services/tutorials/aas-lesson-13-deploy.md b/articles/analysis-services/tutorials/aas-lesson-13-deploy.md index d286f46f4c609..9e8b44a2cb539 100644 --- a/articles/analysis-services/tutorials/aas-lesson-13-deploy.md +++ b/articles/analysis-services/tutorials/aas-lesson-13-deploy.md @@ -11,7 +11,7 @@ tags: '' ms.assetid: ms.service: analysis-services ms.devlang: NA -ms.topic: article +ms.topic: get-started-article ms.tgt_pltfrm: NA ms.workload: na ms.date: 05/26/2017 @@ -23,7 +23,7 @@ ms.author: owend In this lesson, you configure deployment properties; specifying an Analysis Services server in Azure, or a SQL Server vNext Analysis Services server on-premises, and a name for the model. You then deploy the model to that instance. After your model is deployed, users can connect to it by using a reporting client application. To learn more, see [Deploy to Azure Analysis Services](https://docs.microsoft.com/azure/analysis-services/analysis-services-deploy). -Estimated time to complete this lesson: **5 minutes** +Estimated time to complete this lesson: **Five minutes** ## Prerequisites This topic is part of a tabular modeling tutorial, which should be completed in order. Before performing the tasks in this lesson, you should have completed the previous lesson: [Lesson 12: Analyze in Excel](../tutorials/aas-lesson-12-analyze-in-excel.md).
diff --git a/articles/analysis-services/tutorials/aas-lesson-2-Get-data.md b/articles/analysis-services/tutorials/aas-lesson-2-Get-data.md index 0ff44328f3063..9ef8d40b0438d 100644 --- a/articles/analysis-services/tutorials/aas-lesson-2-Get-data.md +++ b/articles/analysis-services/tutorials/aas-lesson-2-Get-data.md @@ -11,10 +11,10 @@ tags: '' ms.assetid: ms.service: analysis-services ms.devlang: NA -ms.topic: article +ms.topic: get-started-article ms.tgt_pltfrm: NA ms.workload: na -ms.date: 05/26/2017 +ms.date: 06/01/2017 ms.author: owend --- @@ -58,11 +58,11 @@ This topic is part of a tabular modeling tutorial, which should be completed in ![aas-lesson2-select-tables](../tutorials/media/aas-lesson2-select-tables.png) -After you click OK, Query Editor will open where, in the next section, you filter the data you want to import. +After you click OK, Query Editor opens. In the next section, you select only the data you want to import. ## Filter the table data -Tables in the AdventureWorksDW2014 sample database have data that isn't necessary to include in your model. When possible, you want to filter out data that will not be used to save in-memory space used by the model. You will filter out some of the columns from tables so they're not imported into the workspace database, or the model database after it has been deployed. +Tables in the AdventureWorksDW2014 sample database have data that isn't necessary to include in your model. When possible, you want to filter out unnecessary data to save in-memory space used by the model. You filter out some of the columns from tables so they're not imported into the workspace database, or the model database after it has been deployed. 
#### To filter the table data before importing @@ -132,7 +132,7 @@ Tables in the AdventureWorksDW2014 sample database have data that isn't necessar |**ShipDateKey**| ## Import the selected tables and column data -Now that you've previewed and filtered out unnecessary data, you can import the rest of the data you do want. The wizard imports the table data along with any relationships between tables. New tables and columns are created in the model and data that you filtered out will not be imported. +Now that you've previewed and filtered out unnecessary data, you can import the rest of the data you do want. The wizard imports the table data along with any relationships between tables. New tables and columns are created in the model and data that you filtered out is not imported. #### To import the selected tables and column data diff --git a/articles/analysis-services/tutorials/aas-lesson-3-mark-as-date-table.md b/articles/analysis-services/tutorials/aas-lesson-3-mark-as-date-table.md index fe91a950b5acb..e7fd753eb7453 100644 --- a/articles/analysis-services/tutorials/aas-lesson-3-mark-as-date-table.md +++ b/articles/analysis-services/tutorials/aas-lesson-3-mark-as-date-table.md @@ -11,10 +11,10 @@ tags: '' ms.assetid: ms.service: analysis-services ms.devlang: NA -ms.topic: article +ms.topic: get-started-article ms.tgt_pltfrm: NA ms.workload: na -ms.date: 05/26/2017 +ms.date: 06/01/2017 ms.author: owend --- # Lesson 3: Mark as Date Table @@ -23,13 +23,13 @@ ms.author: owend In Lesson 2: Get data, you imported a dimension table named DimDate. While in your model this table is named DimDate, it can also be known as a *Date table*, in that it contains date and time data. -Whenever you use DAX time-intelligence functions in calculations, as you will do when you create measures a little later, you must specify date table properties, which include a *Date table* and a unique identifier *Date column* in that table.
+Whenever you use DAX time-intelligence functions, like when you create measures later, you must specify date table properties, which include a *Date table* and a unique identifier *Date column* in that table. In this lesson, you mark the DimDate table as the *Date table* and the Date column (in the Date table) as the *Date column* (unique identifier). -Before you mark the date table and date column, it's a good time to do a little housekeeping to make your model easier to understand. Notice in the DimDate table a column named **FullDateAlternateKey**; it contains one row for every day in each calendar year included in the table. You will be using this column a lot in measure formulas and in reports. But, FullDateAlternateKey isn't really a good identifier for this column. You will rename it to **Date**, making it easier to identify and include in formulas. Whenever possible, it's a good idea to rename objects like tables and columns to make them easier to identify in SSDT and client reporting applications like Power BI and Excel. +Before you mark the date table and date column, it's a good time to do a little housekeeping to make your model easier to understand. Notice in the DimDate table a column named **FullDateAlternateKey**. This column contains one row for every day in each calendar year included in the table. You use this column a lot in measure formulas and in reports. But, FullDateAlternateKey isn't really a good identifier for this column. You rename it to **Date**, making it easier to identify and include in formulas. Whenever possible, it's a good idea to rename objects like tables and columns to make them easier to identify in SSDT and client reporting applications like Power BI and Excel. -Estimated time to complete this lesson: **3 minutes** +Estimated time to complete this lesson: **Three minutes** ## Prerequisites This topic is part of a tabular modeling tutorial, which should be completed in order.
Before performing the tasks in this lesson, you should have completed the previous lesson: [Lesson 2: Get data](../tutorials/aas-lesson-2-get-data.md). @@ -38,7 +38,7 @@ This topic is part of a tabular modeling tutorial, which should be completed in 1. In the model designer, click the **DimDate** table. -2. Double click the header for the **FullDateAlternateKey** column, and then rename it to **Date**. +2. Double-click the header for the **FullDateAlternateKey** column, and then rename it to **Date**. ### To set Mark as Date Table @@ -47,7 +47,7 @@ This topic is part of a tabular modeling tutorial, which should be completed in 2. Click the **Table** menu, then click **Date**, and then click **Mark as Date Table**. -3. In the **Mark as Date Table** dialog box, in the **Date** listbox, select the **Date** column as the unique identifier. It will usually be selected by default. Click **OK**. +3. In the **Mark as Date Table** dialog box, in the **Date** listbox, select the **Date** column as the unique identifier. It's usually selected by default. Click **OK**. ![aas-lesson3-date-table](../tutorials/media/aas-lesson3-date-table.png) diff --git a/articles/analysis-services/tutorials/aas-lesson-4-create-relationships.md b/articles/analysis-services/tutorials/aas-lesson-4-create-relationships.md index 0b763d89e52e9..0bba86a0b4889 100644 --- a/articles/analysis-services/tutorials/aas-lesson-4-create-relationships.md +++ b/articles/analysis-services/tutorials/aas-lesson-4-create-relationships.md @@ -11,7 +11,7 @@ tags: '' ms.assetid: ms.service: analysis-services ms.devlang: NA -ms.topic: article +ms.topic: get-started-article ms.tgt_pltfrm: NA ms.workload: na ms.date: 05/26/2017 @@ -29,26 +29,26 @@ Estimated time to complete this lesson: **10 minutes** This topic is part of a tabular modeling tutorial, which should be completed in order. 
Before performing the tasks in this lesson, you should have completed the previous lesson: [Lesson 3: Mark as Date Table](../tutorials/aas-lesson-3-mark-as-date-table.md). ## Review existing relationships and add new relationships -When you imported data by using Get Data, you got seven tables from the AdventureWorksDW2014 database. Generally, when you import data from a relational source, existing relationships are automatically imported together with the data. However, before you proceed with authoring your model you should verify those relationships between tables were created properly. For this tutorial, you will also add three new relationships. +When you imported data by using Get Data, you got seven tables from the AdventureWorksDW2014 database. Generally, when you import data from a relational source, existing relationships are automatically imported together with the data. However, before you proceed with authoring your model you should verify those relationships between tables were created properly. For this tutorial, you add three new relationships. #### To review existing relationships 1. Click the **Model** menu > **Model View** > **Diagram View**. - The model designer now appears in Diagram View, a graphical format displaying all of the tables you imported with lines between them. The lines between tables indicate the relationships that were automatically created when you imported the data. + The model designer now appears in Diagram View, a graphical format displaying all the tables you imported with lines between them. The lines between tables indicate the relationships that were automatically created when you imported the data. ![aas-lesson4-diagram](../tutorials/media/aas-lesson4-diagram.png) - Use the minimap controls in the lower-right corner of the model designer to adjust the view to include as many of the tables as possible. 
You can also click and drag tables to different locations, bringing tables closer together, or putting them in a particular order. Moving tables does not affect the relationships already between the tables. To view all of the columns in a particular table, click and drag on a table edge to expand or make it smaller.
+   Use the minimap controls in the lower-right corner of the model designer to include as many tables as possible in the view. You can also click and drag tables to different locations, bringing tables closer together, or putting them in a particular order. Moving tables does not affect the relationships already between the tables. To view all the columns in a particular table, click and drag on a table edge to expand or make it smaller.
-2. Click the solid line between the **DimCustomer** table and the **DimGeography** table. The solid line between these two tables show this relationship is active, that is, it is used by default when calculating DAX formulas.
+2. Click the solid line between the **DimCustomer** table and the **DimGeography** table. The solid line between these two tables shows this relationship is active, that is, it is used by default when calculating DAX formulas.
-   Notice the **GeographyKey** column in the **DimCustomer** table and the **GeographyKey** column in the **DimGeography** table now both each appear within a box. This shows these are the columns used in the relationship. The relationship’s properties now also appear in the **Properties** window.
+   Notice the **GeographyKey** column in the **DimCustomer** table and the **GeographyKey** column in the **DimGeography** table now both appear within a box. These columns are used in the relationship. The relationship’s properties now also appear in the **Properties** window.
   > [!TIP]
   > In addition to using the model designer in diagram view, you can also use the Manage Relationships dialog box to show the relationships between all tables in a table format.
In Tabular Model Explorer, right-click **Relationships** > **Manage Relationships**.
-3. Use the model designer in diagram view, or the Manage Relationships dialog box, to verify the following relationships were created when each of the tables were imported from the AdventureWorksDW database:
+3. Verify the following relationships were created when each of the tables was imported from the AdventureWorksDW database:
    |Active|Table|Related Lookup Table|
    |----------|---------|------------------------|
@@ -58,26 +58,26 @@ When you imported data by using Get Data, you got seven tables from the Adventur
    |Yes|**FactInternetSales [CustomerKey]**|**DimCustomer [CustomerKey]**|
    |Yes|**FactInternetSales [ProductKey]**|**DimProduct [ProductKey]**|
-   If any of the relationships in the table above are missing, verify that your model includes the following tables: DimCustomer, DimDate, DimGeography, DimProduct, DimProductCategory, DimProductSubcategory, and FactInternetSales. If tables from the same data source connection are imported at separate times, any relationships between those tables will not be created and must be created manually.
+   If any of the relationships are missing, verify that your model includes the following tables: DimCustomer, DimDate, DimGeography, DimProduct, DimProductCategory, DimProductSubcategory, and FactInternetSales. If tables from the same data source connection are imported at separate times, any relationships between those tables are not created and must be created manually.
### Take a closer look
-In Diagram View, you'll notice an arrow, an asterisk, and a number on the lines that show the relationship between tables.
+In Diagram View, notice an arrow, an asterisk, and a number on the lines that show the relationship between tables.
![aas-lesson4-line](../tutorials/media/aas-lesson4-line.png)
-The arrow shows the filter direction, the asterisk shows this table is the many side in the relationship's cardinality, and the 1 shows this table is the one side of the relationship. If you need to edit a relationship; for example, change the relationship's filter direction or cardinality, double-click the relationship line to open the Edit Relationship dialog.
+The arrow shows the filter direction. The asterisk shows this table is the many side of the relationship's cardinality, and the 1 shows this table is the one side. To edit a relationship, for example, to change its filter direction or cardinality, double-click the relationship line to open the Edit Relationship dialog.
![aas-lesson4-edit](../tutorials/media/aas-lesson4-edit.png)
-Most likely, you will never need to edit a relationship. These features are meant for advanced data modeling and are outside the scope of this tutorial. To learn more, see [Bi-directional cross filters for tabular models in Analysis Services](https://docs.microsoft.com/sql/analysis-services/tabular-models/bi-directional-cross-filters-tabular-models-analysis-services).
+These features are meant for advanced data modeling and are outside the scope of this tutorial. To learn more, see [Bi-directional cross filters for tabular models in Analysis Services](https://docs.microsoft.com/sql/analysis-services/tabular-models/bi-directional-cross-filters-tabular-models-analysis-services).
In some cases, you may need to create additional relationships between tables in your model to support certain business logic. For this tutorial, you need to create three additional relationships between the FactInternetSales table and the DimDate table.
#### To add new relationships between tables
-1. 
In the model designer, in the **FactInternetSales** table, click and hold on the **OrderDate** column, then drag the cursor to the **Date** column in the **DimDate** table, and then release.
+1. In the model designer, in the **FactInternetSales** table, click and hold the **OrderDate** column, then drag the cursor to the **Date** column in the **DimDate** table, and then release.
-   A solid line appears showing you have created an active relationship between the **OrderDate** column in the **Internet Sales** table and the **Date** column in the **Date** table.
+   A solid line appears, showing you have created an active relationship between the **OrderDate** column in the **Internet Sales** table and the **Date** column in the **Date** table.
    ![aas-lesson4-new](../tutorials/media/aas-lesson4-new.png)
@@ -86,9 +86,9 @@ In some cases, you may need to create additional relationships between tables in
 2. In the **FactInternetSales** table, click and hold on the **DueDate** column, then drag the cursor to the **Date** column in the **DimDate** table, and then release.
-   A dotted line appears showing you have created an inactive relationship between the **DueDate** column in the **FactInternetSales** table and the **Date** column in the **DimDate** table. You can have multiple relationships between tables, but only one relationship can be active at a time. Inactive relationships can be made active to perform special aggregations in custom DAX expressions.
+   A dotted line appears, showing you have created an inactive relationship between the **DueDate** column in the **FactInternetSales** table and the **Date** column in the **DimDate** table. You can have multiple relationships between tables, but only one relationship can be active at a time. Inactive relationships can be made active to perform special aggregations in custom DAX expressions.
-3. 
Finally, create one more relationship. In the **FactInternetSales** table, click and hold on the **ShipDate** column, then drag the cursor to the **Date** column in the **DimDate** table, and then release.
+3. Finally, create one more relationship. In the **FactInternetSales** table, click and hold the **ShipDate** column, then drag the cursor to the **Date** column in the **DimDate** table, and then release.
    ![aas-lesson4-newinactive](../tutorials/media/aas-lesson4-newinactive.png)
diff --git a/articles/analysis-services/tutorials/aas-lesson-5-create-calculated-columns.md b/articles/analysis-services/tutorials/aas-lesson-5-create-calculated-columns.md
index 9ff2ff591e8a2..3ed267f756527 100644
--- a/articles/analysis-services/tutorials/aas-lesson-5-create-calculated-columns.md
+++ b/articles/analysis-services/tutorials/aas-lesson-5-create-calculated-columns.md
@@ -11,21 +11,21 @@ tags: ''
 ms.assetid:
 ms.service: analysis-services
 ms.devlang: NA
-ms.topic: article
+ms.topic: get-started-article
 ms.tgt_pltfrm: NA
 ms.workload: na
-ms.date: 05/26/2017
+ms.date: 06/01/2017
 ms.author: owend
---
# Lesson 5: Create calculated columns
[!INCLUDE[analysis-services-appliesto-aas-sql2017-later](../../../includes/analysis-services-appliesto-aas-sql2017-later.md)]
-In this lesson, you will create new data in your model by adding calculated columns. You can create calculated columns (as custom columns) when using Get Data, by using the Query Editor, or later in the model designer like you will do here. 
+In this lesson, you create new data in your model by adding calculated columns. You can create calculated columns (as custom columns) when using Get Data, by using the Query Editor, or later in the model designer like you do here. 
To learn more, see [Calculated columns](https://docs.microsoft.com/sql/analysis-services/tabular-models/ssas-calculated-columns).
-You will create five new calculated columns in three different tables. The steps are slightly different for each task. This is to show you there are several ways to create new columns, rename them, and place them in various locations in a table.
+You create five new calculated columns in three different tables. The steps are slightly different for each task, showing that there are several ways to create columns, rename them, and place them in various locations in a table.
-This is also where you will first use Data Analysis Expressions (DAX). DAX is a special language for creating highly customizable formula expressions for tabular models. In this tutorial, you will use DAX to create calculated columns, measures, and role filters. To learn more, see [DAX in tabular models](https://docs.microsoft.com/sql/analysis-services/tabular-models/understanding-dax-in-tabular-models-ssas-tabular).
+This lesson is also where you first use Data Analysis Expressions (DAX). DAX is a special language for creating highly customizable formula expressions for tabular models. In this tutorial, you use DAX to create calculated columns, measures, and role filters. To learn more, see [DAX in tabular models](https://docs.microsoft.com/sql/analysis-services/tabular-models/understanding-dax-in-tabular-models-ssas-tabular).
Estimated time to complete this lesson: **15 minutes**
@@ -46,13 +46,13 @@ This topic is part of a tabular modeling tutorial, which should be completed in 
   A new column named **Calculated Column 1** is inserted to the left of the **Calendar Quarter** column.
-4. In the formula bar above the table, type the following DAX formula. AutoComplete helps you type the fully qualified names of columns and tables, and lists the functions that are available.
+4. 
In the formula bar above the table, type the following DAX formula. AutoComplete helps you type the fully qualified names of columns and tables, and lists the available functions.
    ```
    =RIGHT(" " & FORMAT([MonthNumberOfYear],"#0"), 2) & " - " & [EnglishMonthName]
    ```
-   Values are then populated for all the rows in the calculated column. If you scroll down through the table, you will see that rows can have different values for this column, based on the data that is in each row.
+   Values are then populated for all the rows in the calculated column. If you scroll down through the table, you see that rows can have different values for this column, based on the data in each row.
5. Rename this column to **MonthCalendar**.
@@ -62,7 +62,7 @@ The MonthCalendar calculated column provides a sortable name for Month.
#### Create a DayOfWeek calculated column in the DimDate table
-1. With the **DimDate** table still active, click on the **Column** menu, and then click **Add Column**.
+1. With the **DimDate** table still active, click the **Column** menu, and then click **Add Column**.
 2. In the formula bar, type the following formula:
@@ -74,7 +74,7 @@ The MonthCalendar calculated column provides a sortable name for Month.
 3. Rename the column to **DayOfWeek**.
-4. Click on the column heading, and then drag the column between the **EnglishDayNameOfWeek** column and the **DayNumberOfMonth** column.
+4. Click the column heading, and then drag the column between the **EnglishDayNameOfWeek** column and the **DayNumberOfMonth** column.
   > [!TIP]
   > Moving columns in your table makes it easier to navigate.
@@ -86,7 +86,7 @@ The DayOfWeek calculated column provides a sortable name for the day of week.
 1. In the **DimProduct** table, scroll to the far right of the table. Notice the right-most column is named **Add Column** (italicized), click the column heading.
-2. In the formula bar, type the following formula.
+2. 
In the formula bar, type the following formula: ``` =RELATED('DimProductSubcategory'[EnglishProductSubcategoryName]) @@ -94,7 +94,7 @@ The DayOfWeek calculated column provides a sortable name for the day of week. 3. Rename the column to **ProductSubcategoryName**. -The ProductSubcategoryName calculated column is used to create a hierarchy in the DimProduct table which includes data from the EnglishProductSubcategoryName column in the DimProductSubcategory table. Hierarchies cannot span more than one table. You will create hierarchies later in Lesson 9. +The ProductSubcategoryName calculated column is used to create a hierarchy in the DimProduct table, which includes data from the EnglishProductSubcategoryName column in the DimProductSubcategory table. Hierarchies cannot span more than one table. You create hierarchies later in Lesson 9. #### Create a ProductCategoryName calculated column in the DimProduct table @@ -108,7 +108,7 @@ The ProductSubcategoryName calculated column is used to create a hierarchy in th 3. Rename the column to **ProductCategoryName**. -The ProductCategoryName calculated column is used to create a hierarchy in the DimProduct table which includes data from the EnglishProductCategoryName column in the DimProductCategory table. Hierarchies cannot span more than one table. +The ProductCategoryName calculated column is used to create a hierarchy in the DimProduct table, which includes data from the EnglishProductCategoryName column in the DimProductCategory table. Hierarchies cannot span more than one table. 
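The lookup columns in this lesson rely on DAX's RELATED function, which follows an existing many-to-one relationship and returns the single matching value from the one side. As a hedged sketch of the same pattern for the category lookup (table and column names assumed from this model, not copied from the lesson's steps):

```
=RELATED('DimProductCategory'[EnglishProductCategoryName])
```

Because RELATED depends on the relationships verified in Lesson 4, the formula returns blank if no relationship path exists between the tables.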
#### Create a Margin calculated column in the FactInternetSales table
diff --git a/articles/analysis-services/tutorials/aas-lesson-6-create-measures.md b/articles/analysis-services/tutorials/aas-lesson-6-create-measures.md
index a9f94b47d090d..095719e39ee13 100644
--- a/articles/analysis-services/tutorials/aas-lesson-6-create-measures.md
+++ b/articles/analysis-services/tutorials/aas-lesson-6-create-measures.md
@@ -11,23 +11,23 @@ tags: ''
 ms.assetid:
 ms.service: analysis-services
 ms.devlang: NA
-ms.topic: article
+ms.topic: get-started-article
 ms.tgt_pltfrm: NA
 ms.workload: na
-ms.date: 05/26/2017
+ms.date: 06/01/2017
 ms.author: owend
---
# Lesson 6: Create measures
[!INCLUDE[analysis-services-appliesto-aas-sql2017-later](../../../includes/analysis-services-appliesto-aas-sql2017-later.md)]
-In this lesson, you create measures to be included in your model. Similar to the calculated columns you created, a measure is a calculation created by using a DAX formula. However, unlike calculated columns, measures are evaluated based on a user selected *filter*; for example, a particular column or slicer added to the Row Labels field in a PivotTable. A value for each cell in the filter is then calculated by the applied measure. Measures are powerful, flexible calculations that you will want to include in almost all tabular models to perform dynamic calculations on numerical data. To learn more, see [Measures](https://docs.microsoft.com/sql/analysis-services/tabular-models/measures-ssas-tabular).
+In this lesson, you create measures to be included in your model. Similar to the calculated columns you created, a measure is a calculation created by using a DAX formula. However, unlike calculated columns, measures are evaluated based on a user-selected *filter*, for example, a particular column or slicer added to the Row Labels field in a PivotTable. A value for each cell in the filter is then calculated by the applied measure. 
Measures are powerful, flexible calculations that you want to include in almost all tabular models to perform dynamic calculations on numerical data. To learn more, see [Measures](https://docs.microsoft.com/sql/analysis-services/tabular-models/measures-ssas-tabular).
-To create measures, you use the *Measure Grid*. By default, each table has an empty measure grid; however, you typically will not create measures for every table. The measure grid appears below a table in the model designer when in Data View. To hide or show the measure grid for a table, click the **Table** menu, and then click **Show Measure Grid**.
+To create measures, you use the *Measure Grid*. By default, each table has an empty measure grid; however, you typically do not create measures for every table. The measure grid appears below a table in the model designer when in Data View. To hide or show the measure grid for a table, click the **Table** menu, and then click **Show Measure Grid**.
-You can create a measure by clicking on an empty cell in the measure grid, and then typing a DAX formula in the formula bar. When you click ENTER to complete the formula, the measure will then appear in the cell. You can also create measures using a standard aggregation function by clicking on a column, and then clicking on the AutoSum button (**∑**) on the toolbar. Measures created using the AutoSum feature will appear in the measure grid cell directly beneath the column, but can be moved.
+You can create a measure by clicking an empty cell in the measure grid, and then typing a DAX formula in the formula bar. When you press ENTER to complete the formula, the measure then appears in the cell. You can also create measures using a standard aggregation function by clicking a column, and then clicking the AutoSum button (**∑**) on the toolbar. Measures created using the AutoSum feature appear in the measure grid cell directly beneath the column, but can be moved. 
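A measure definition typed into the measure grid uses the `Name:=expression` form, unlike a calculated column, which begins with `=`. As a minimal sketch (the measure name here is an assumption for illustration, not one of this lesson's steps):

```
InternetTotalSales:=SUM('FactInternetSales'[SalesAmount])
```

Clicking AutoSum on a column typically produces an equivalent aggregation measure that you can then rename and move within the grid.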
-In this lesson, you create measures by both entering a DAX formula in the formula bar and by using the AutoSum feature.
+In this lesson, you create measures both by entering a DAX formula in the formula bar and by using the AutoSum feature.
Estimated time to complete this lesson: **30 minutes**
@@ -57,7 +57,7 @@ This topic is part of a tabular modeling tutorial, which should be completed in 
#### To create a DaysInCurrentQuarter measure in the DimDate table
-1. With the **DimDate** table still active in the model designer, in the measure grid, click the empty cell below the measure you just created.
+1. With the **DimDate** table still active in the model designer, in the measure grid, click the empty cell below the measure you created.
 2. In the formula bar, type the following formula:
@@ -65,13 +65,13 @@ This topic is part of a tabular modeling tutorial, which should be completed in 
    DaysInCurrentQuarter:=COUNTROWS( DATESBETWEEN( 'DimDate'[Date], STARTOFQUARTER( LASTDATE('DimDate'[Date])), ENDOFQUARTER('DimDate'[Date])))
    ```
-    When creating a comparison ratio between one incomplete period and the previous period; the formula must take into account the proportion of the period that has elapsed, and compare it to the same proportion in the previous period. In this case, [DaysCurrentQuarterToDate]/[DaysInCurrentQuarter] gives the proportion elapsed in the current period.
+    When you create a comparison ratio between one incomplete period and the previous period, the formula must calculate the proportion of the period that has elapsed and compare it to the same proportion in the previous period. In this case, [DaysCurrentQuarterToDate]/[DaysInCurrentQuarter] gives the proportion elapsed in the current period.
#### To create an InternetDistinctCountSalesOrder measure in the FactInternetSales table
 1. Click the **FactInternetSales** table.
-2. Click on the **SalesOrderNumber** column heading.
+2. Click the **SalesOrderNumber** column heading.
 3. 
On the toolbar, click the down-arrow next to the AutoSum (**∑**) button, and then select **DistinctCount**.
@@ -97,7 +97,7 @@ This topic is part of a tabular modeling tutorial, which should be completed in 
 |TaxAmt|InternetTotalTaxAmt|Sum|=SUM([TaxAmt])|
 |Freight|InternetTotalFreight|Sum|=SUM([Freight])|
-2. By clicking on an empty cell in the measure grid, and by using the formula bar, create and name the following measures in order:
+2. By clicking an empty cell in the measure grid, and by using the formula bar, create and name the following measures in order:
    ```
    InternetPreviousQuarterMargin:=CALCULATE([InternetTotalMargin],PREVIOUSQUARTER('DimDate'[Date]))
diff --git a/articles/analysis-services/tutorials/aas-lesson-7-create-key-performance-indicators.md b/articles/analysis-services/tutorials/aas-lesson-7-create-key-performance-indicators.md
index cd911b6c586cc..b73ec35190a4b 100644
--- a/articles/analysis-services/tutorials/aas-lesson-7-create-key-performance-indicators.md
+++ b/articles/analysis-services/tutorials/aas-lesson-7-create-key-performance-indicators.md
@@ -11,7 +11,7 @@ tags: ''
 ms.assetid:
 ms.service: analysis-services
 ms.devlang: NA
-ms.topic: article
+ms.topic: get-started-article
 ms.tgt_pltfrm: NA
 ms.workload: na
 ms.date: 05/26/2017
@@ -21,7 +21,7 @@ ms.author: owend
[!INCLUDE[analysis-services-appliesto-aas-sql2017-later](../../../includes/analysis-services-appliesto-aas-sql2017-later.md)]
-In this lesson, you create Key Performance Indicators (KPIs). KPIs are used to gauge performance of a value, defined by a *Base* measure, against a *Target* value, also defined by a measure or by an absolute value. In reporting client applications, KPIs can provide business professionals a quick and easy way to understand a summary of business success or to identify trends. To learn more, see [KPIs](https://docs.microsoft.com/sql/analysis-services/tabular-models/kpis-ssas-tabular)
+In this lesson, you create Key Performance Indicators (KPIs). 
KPIs are used to gauge the performance of a value, defined by a *Base* measure, against a *Target* value, defined by another measure or by an absolute value. In reporting client applications, KPIs can provide business professionals a quick and easy way to understand a summary of business success or to identify trends. To learn more, see [KPIs](https://docs.microsoft.com/sql/analysis-services/tabular-models/kpis-ssas-tabular).
Estimated time to complete this lesson: **15 minutes**
@@ -42,7 +42,7 @@ This topic is part of a tabular modeling tutorial, which should be completed in 
    InternetCurrentQuarterSalesPerformance :=DIVIDE([InternetCurrentQuarterSales],[InternetPreviousQuarterSalesProportionToQTD],BLANK())
    ```
-    This measure will serve as the Base measure for the KPI.
+    This measure serves as the Base measure for the KPI.
 4. Right-click **InternetCurrentQuarterSalesPerformance** > **Create KPI**.
@@ -55,7 +55,7 @@ This topic is part of a tabular modeling tutorial, which should be completed in 
    ![aas-lesson7-kpi](../tutorials/media/aas-lesson7-kpi.png)
    > [!TIP]
-   > Notice the expandable **Descriptions** label below the available icon styles. Use this to enter descriptions for the various KPI elements to make them more identifiable in client applications.
+   > Notice the expandable **Descriptions** label below the available icon styles. Use descriptions for the various KPI elements to make them more identifiable in client applications.
9. Click **OK** to complete the KPI.
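As described above, a KPI gauges a Base measure against a Target that is either another measure or an absolute value. A generic, hypothetical sketch of such a pair (the names and the 1.1 growth factor are illustrative only, not this tutorial's measures):

```
CurrentSales:=SUM('FactInternetSales'[SalesAmount])
SalesTarget:=CALCULATE([CurrentSales], PREVIOUSQUARTER('DimDate'[Date])) * 1.1
```

In the Create KPI dialog box, the measure you right-clicked becomes the Base, and you then select either a Target measure like SalesTarget or type an absolute value.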
diff --git a/articles/analysis-services/tutorials/aas-lesson-8-create-perspectives.md b/articles/analysis-services/tutorials/aas-lesson-8-create-perspectives.md
index 26b395a5f38d2..974e5e556a8be 100644
--- a/articles/analysis-services/tutorials/aas-lesson-8-create-perspectives.md
+++ b/articles/analysis-services/tutorials/aas-lesson-8-create-perspectives.md
@@ -11,7 +11,7 @@ tags: ''
 ms.assetid:
 ms.service: analysis-services
 ms.devlang: NA
-ms.topic: article
+ms.topic: get-started-article
 ms.tgt_pltfrm: NA
 ms.workload: na
 ms.date: 05/26/2017
@@ -23,11 +23,11 @@ ms.author: owend
In this lesson, you create an Internet Sales perspective. A perspective defines a viewable subset of a model that provides focused, business-specific, or application-specific viewpoints. When a user connects to a model by using a perspective, they see only those model objects (tables, columns, measures, hierarchies, and KPIs) as fields defined in that perspective. To learn more, see [Perspectives](https://docs.microsoft.com/sql/analysis-services/tabular-models/perspectives-ssas-tabular).
-The Internet Sales perspective you create in this lesson will exclude the DimCustomer table object. When you create a perspective that excludes certain objects from view, that object still exists in the model; however, it is not visible in a reporting client field list. Calculated columns and measures either included in a perspective or not can still calculate from object data that is excluded.
+The Internet Sales perspective you create in this lesson excludes the DimCustomer table object. When you create a perspective that excludes certain objects from view, that object still exists in the model. However, it is not visible in a reporting client field list. Calculated columns and measures, whether or not they are included in a perspective, can still calculate from object data that is excluded. 
The purpose of this lesson is to describe how to create perspectives and become familiar with the tabular model authoring tools. If you later expand this model to include additional tables, you can create additional perspectives to define different viewpoints of the model, for example, Inventory and Sales.
Estimated time to complete this lesson: **5 minutes**
## Prerequisites
This topic is part of a tabular modeling tutorial, which should be completed in order. Before performing the tasks in this lesson, you should have completed the previous lesson: [Lesson 7: Create Key Performance Indicators](../tutorials/aas-lesson-7-create-key-performance-indicators.md).
@@ -42,11 +42,11 @@ This topic is part of a tabular modeling tutorial, which should be completed in 
 3. Double-click the **New Perspective** column heading, and then rename **Internet Sales**.
-4. Select the all of the tables *except* **DimCustomer**.
+4. Select all the tables *except* **DimCustomer**.
    ![aas-lesson8-perspectives](../tutorials/media/aas-lesson8-perspectives.png)
-   In a later lesson, you will use the Analyze in Excel feature to test this perspective. The Excel PivotTable Fields List will include each table except the DimCustomer table.
+   In a later lesson, you use the Analyze in Excel feature to test this perspective. The Excel PivotTable Fields List includes each table except the DimCustomer table.
## What's next?
[Lesson 9: Create hierarchies](../tutorials/aas-lesson-9-create-hierarchies.md).
diff --git a/articles/analysis-services/tutorials/aas-lesson-9-create-hierarchies.md b/articles/analysis-services/tutorials/aas-lesson-9-create-hierarchies.md index ff7dc43313533..e26c894bfbb2e 100644 --- a/articles/analysis-services/tutorials/aas-lesson-9-create-hierarchies.md +++ b/articles/analysis-services/tutorials/aas-lesson-9-create-hierarchies.md @@ -11,7 +11,7 @@ tags: '' ms.assetid: ms.service: analysis-services ms.devlang: NA -ms.topic: article +ms.topic: get-started-article ms.tgt_pltfrm: NA ms.workload: na ms.date: 05/26/2017 @@ -21,7 +21,7 @@ ms.author: owend [!INCLUDE[analysis-services-appliesto-aas-sql2017-later](../../../includes/analysis-services-appliesto-aas-sql2017-later.md)] -In this lesson, you create hierarchies. Hierarchies are groups of columns arranged in levels; for example, a Geography hierarchy might have sub-levels for Country, State, County, and City. Hierarchies can appear separate from other columns in a reporting client application field list, making them easier for client users to navigate and include in a report. To learn more, see [Hierarchies](https://docs.microsoft.com/sql/analysis-services/tabular-models/hierarchies-ssas-tabular) +In this lesson, you create hierarchies. Hierarchies are groups of columns arranged in levels; for example, a Geography hierarchy might have sublevels for Country, State, County, and City. Hierarchies can appear separate from other columns in a reporting client application field list, making them easier for client users to navigate and include in a report. To learn more, see [Hierarchies](https://docs.microsoft.com/sql/analysis-services/tabular-models/hierarchies-ssas-tabular) To create hierarchies, use the model designer in *Diagram View*. Creating and managing hierarchies is not supported in Data View. @@ -53,7 +53,7 @@ This topic is part of a tabular modeling tutorial, which should be completed in #### To create hierarchies in the DimDate table -1. 
In the **DimDate** table, create a new hierarchy named **Calendar**. +1. In the **DimDate** table, create a hierarchy named **Calendar**. 3. Add the following columns in-order: diff --git a/articles/analysis-services/tutorials/aas-supplemental-lesson-detail-rows.md b/articles/analysis-services/tutorials/aas-supplemental-lesson-detail-rows.md index 6f201a352982a..5f8e63d58f6ac 100644 --- a/articles/analysis-services/tutorials/aas-supplemental-lesson-detail-rows.md +++ b/articles/analysis-services/tutorials/aas-supplemental-lesson-detail-rows.md @@ -11,7 +11,7 @@ tags: '' ms.assetid: ms.service: analysis-services ms.devlang: NA -ms.topic: article +ms.topic: get-started-article ms.tgt_pltfrm: NA ms.workload: na ms.date: 05/26/2017 @@ -37,7 +37,7 @@ Let's look at the details of our InternetTotalSales measure, before adding a Det ![aas-lesson-detail-rows-pivottable](../tutorials/media/aas-lesson-detail-rows-pivottable.png) -3. In the PivotTable, double-click an aggregated value for a year and a region name. Here we double-clicked on the value for Australia and the year 2014. A new sheet opens containing a lot of data, but not really useful. +3. In the PivotTable, double-click an aggregated value for a year and a region name. Here we double-clicked the value for Australia and the year 2014. A new sheet opens containing data, but not useful data. 
![aas-lesson-detail-rows-pivottable](../tutorials/media/aas-lesson-detail-rows-sheet.png) diff --git a/articles/analysis-services/tutorials/aas-supplemental-lesson-dynamic-security.md b/articles/analysis-services/tutorials/aas-supplemental-lesson-dynamic-security.md index 03c0cbcfeb006..66c1b7f12707b 100644 --- a/articles/analysis-services/tutorials/aas-supplemental-lesson-dynamic-security.md +++ b/articles/analysis-services/tutorials/aas-supplemental-lesson-dynamic-security.md @@ -11,7 +11,7 @@ tags: '' ms.assetid: ms.service: analysis-services ms.devlang: NA -ms.topic: article +ms.topic: get-started-article ms.tgt_pltfrm: NA ms.workload: na ms.date: 05/26/2017 @@ -25,9 +25,9 @@ In this supplemental lesson, you create an additional role that implements dynam To implement dynamic security, you add a table to your model containing the user names of those users that can connect to the model and browse model objects and data. The model you create using this tutorial is in the context of Adventure Works; however, to complete this lesson, you must add a table containing users from your own domain. You do not need the passwords for the user names that are added. To create an EmployeeSecurity table, with a small sample of users from your own domain, you use the Paste feature, pasting employee data from an Excel spreadsheet. In a real-world scenario, the table containing user names would typically be a table from an actual database as a data source; for example, a real DimEmployee table. -To implement dynamic security, you use two DAX functions: [USERNAME Function (DAX)](http://msdn.microsoft.com/22dddc4b-1648-4c89-8c93-f1151162b93f) and [LOOKUPVALUE Function (DAX)](http://msdn.microsoft.com/73a51c4d-131c-4c33-a139-b1342d10caab). These functions, applied in a row filter formula, are defined in a new role. 
Using the LOOKUPVALUE function, the formula specifies a value from the EmployeeSecurity table and then passes that value to the USERNAME function, which specifies the user name of the user logged on belongs to this role. The user can then browse only data specified by the role’s row filters. In this scenario, you specify that sales employees can only browse Internet sales data for the sales territories in which they are a member. +To implement dynamic security, you use two DAX functions: [USERNAME Function (DAX)](http://msdn.microsoft.com/22dddc4b-1648-4c89-8c93-f1151162b93f) and [LOOKUPVALUE Function (DAX)](http://msdn.microsoft.com/73a51c4d-131c-4c33-a139-b1342d10caab). These functions, applied in a row filter formula, are defined in a new role. By using the LOOKUPVALUE function, the formula specifies a value from the EmployeeSecurity table. The formula then passes that value to the USERNAME function, which specifies that the user name of the user who is logged on belongs to this role. The user can then browse only data specified by the role’s row filters. In this scenario, you specify that sales employees can only browse Internet sales data for the sales territories in which they are a member. -For this supplemental lesson, you complete a series of tasks. Those tasks that are unique to this Adventure Works tabular model scenario, but would not necessarily apply to a real-world scenario are identified as such. Each task includes additional information describing the purpose of the task. +Tasks that are unique to this Adventure Works tabular model scenario, but that would not necessarily apply to a real-world scenario, are identified as such. Each task includes additional information describing the purpose of the task. Estimated time to complete this lesson: **30 minutes** @@ -51,14 +51,14 @@ To implement dynamic security for this Adventure Works scenario, you must add tw The new table is added to the model workspace.
Objects and data from the source DimSalesTerritory table are then imported into your AW Internet Sales Tabular Model. -9. After the table has been imported successfuly, click **Close**. +9. After the table has been imported successfully, click **Close**. ## Add a table with user name data -Because the DimEmployee table in the AdventureWorksDW sample database contains users from the AdventureWorks domain, and those user names do not exist in your own environment, you must create a table in your model that contains a small sample (three) of actual users from your organization. You then add these users as members to the new role. You do not need the passwords for the sample user names, but you do need actual Windows user names from your own domain. +The DimEmployee table in the AdventureWorksDW sample database contains users from the AdventureWorks domain. Those user names do not exist in your own environment. You must create a table in your model that contains a small sample (at least three) of actual users from your organization. You then add these users as members to the new role. You do not need the passwords for the sample user names, but you do need actual Windows user names from your own domain. #### To add an EmployeeSecurity table -1. Open Microsoft Excel, creating a new worksheet. +1. Open Microsoft Excel and create a worksheet. 2. Copy the following table, including the header row, and then paste it into the worksheet. @@ -71,7 +71,7 @@ Because the DimEmployee table in the AdventureWorksDW sample database contains u |3|5|||\| ``` -3. Replace the first name, last name, and domain\username with the names and login ids of three users in your organization. Put the same user on the first two rows, for EmployeeId 1. This shows this user belongs to more than one sales territory. Leave the EmployeeId and SalesTerritoryId fields as they are. +3. Replace the first name, last name, and domain\username with the names and login IDs of three users in your organization.
Put the same user on the first two rows, for EmployeeId 1, showing that this user belongs to more than one sales territory. Leave the EmployeeId and SalesTerritoryId fields as they are. 4. Save the worksheet as **SampleEmployee**. @@ -94,21 +94,21 @@ The FactInternetSales, DimGeography, and DimSalesTerritory table all contain a c #### To create relationships between the FactInternetSales, DimGeography, and the DimSalesTerritory table -1. In the model designer, in Diagram View, in the **DimGeography** table, click, and hold on the **SalesTerritoryId** column, then drag the cursor to the **SalesTerritoryId** column in the **DimSalesTerritory** table, and then release. +1. In Diagram View, in the **DimGeography** table, click and hold on the **SalesTerritoryId** column, then drag the cursor to the **SalesTerritoryId** column in the **DimSalesTerritory** table, and then release. 2. In the **FactInternetSales** table, click, and hold on the **SalesTerritoryId** column, then drag the cursor to the **SalesTerritoryId** column in the **DimSalesTerritory** table, and then release. - Notice the Active property for this relationship is False, meaning it's inactive; this is because the FactInternetSales table already has another active relationship. + Notice the Active property for this relationship is False, meaning it's inactive. The FactInternetSales table already has another active relationship. ## Hide the EmployeeSecurity Table from client applications -In this task, you hide the EmployeeSecurity table, keeping it from appearing in a client application’s field list. Keep in-mind that hiding a table does not secure it. Users can still query EmployeeSecurity table data, if they know how. To secure the EmployeeSecurity table data, preventing users from being able to query any of its data, you apply a filter in a later task. +In this task, you hide the EmployeeSecurity table, keeping it from appearing in a client application’s field list.
Keep in mind that hiding a table does not secure it. Users can still query EmployeeSecurity table data if they know how. To secure the EmployeeSecurity table data, preventing users from being able to query any of its data, you apply a filter in a later task. #### To hide the EmployeeSecurity table from client applications - In the model designer, in Diagram View, right-click the **EmployeeSecurity** table heading, and then click **Hide from Client Tools**. ## Create a Sales Employees by Territory user role -In this task, you create a new user role. This role includes a row filter defining which rows of the DimSalesTerritory table are visible to users. The filter is then applied in the one-to-many relationship direction to all other tables related to DimSalesTerritory. You also apply a filter that secures the entire EmployeeSecurity table from being queryable by any user that is a member of the role. +In this task, you create a user role. This role includes a row filter defining which rows of the DimSalesTerritory table are visible to users. The filter is then applied in the one-to-many relationship direction to all other tables related to DimSalesTerritory. You also apply a filter that secures the entire EmployeeSecurity table from being queryable by any user that is a member of the role. > [!NOTE] > The Sales Employees by Territory role you create in this lesson restricts members to browse (or query) only sales data for the sales territory to which they belong. If you add a user as a member to the Sales Employees by Territory role that also exists as a member in a role created in [Lesson 11: Create Roles](../tutorials/aas-lesson-11-create-roles.md), you get a combination of permissions. When a user is a member of multiple roles, the permissions, and row filters defined for each role are cumulative. That is, the user has the greater permissions determined by the combination of roles. @@ -139,7 +139,7 @@ In this task, you create a new user role.
This role includes a row filter defini =FALSE() ``` - This formula specifies that all columns resolve to the false Boolean condition; therefore, no columns for the EmployeeSecurity table can be queried by a member of the Sales Employees by Territory user role. + This formula specifies that all columns resolve to the false Boolean condition. No columns for the EmployeeSecurity table can be queried by a member of the Sales Employees by Territory user role. 9. For the **DimSalesTerritory** table, type the following formula: @@ -165,21 +165,21 @@ In this task, you use the Analyze in Excel feature in SSDT to test the efficacy 2. In the **Analyze in Excel** dialog box, in **Specify the user name or role to use to connect to the model**, select **Other Windows User**, and then click **Browse**. -3. In the **Select User or Group** dialog box, in **Enter the object name to select**, type one of the user names you included in the EmployeeSecurity table, and then click **Check Names**. +3. In the **Select User or Group** dialog box, in **Enter the object name to select**, type a user name you included in the EmployeeSecurity table, and then click **Check Names**. 4. Click **Ok** to close the **Select User or Group** dialog box, and then click **Ok** to close the **Analyze in Excel** dialog box. Excel opens with a new workbook. A PivotTable is automatically created. The PivotTable Fields list includes most of the data fields available in your new model. - Notice the EmployeeSecurity table is not visible in the PivotTable Fields list; this is because you hid this table from client tools in a previous task. + Notice the EmployeeSecurity table is not visible in the PivotTable Fields list. You hid this table from client tools in a previous task. 5. In the **Fields** list, in **∑ Internet Sales** (measures), select the **InternetTotalSales** measure. The measure is entered into the **Values** fields. 6. Select the **SalesTerritoryId** column from the **DimSalesTerritory** table. 
The column is entered into the **Row Labels** fields. - Notice Internet sales figures appear only for the one region to which the effective user name you used belongs. If you select another column; for example, City, from the DimGeography table as Row Label field, only cities in the sales territory to which the effective user belongs are displayed. + Notice Internet sales figures appear only for the one region to which the effective user name you used belongs. If you select another column, like City from the DimGeography table, as a Row Label field, only cities in the sales territory to which the effective user belongs are displayed. - This user cannot browse or query any Internet sales data for territories other than the one they belong to. This restriction is because the row filter defined for the DimSalesTerritory table, in the Sales Employees by Territory user role, effectively secures data for all data related to other sales territories. + This user cannot browse or query any Internet sales data for territories other than the one they belong to. This restriction is because the row filter defined for the DimSalesTerritory table, in the Sales Employees by Territory user role, secures all data related to other sales territories.
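The row filter this lesson builds can be summarized in a single formula. The following is only a sketch of the DimSalesTerritory row filter; the exact table and column names depend on your model, so treat them as assumptions:

```dax
='DimSalesTerritory'[SalesTerritoryId]
    = LOOKUPVALUE(
        'EmployeeSecurity'[SalesTerritoryId],
        'EmployeeSecurity'[LoginId], USERNAME(),
        'EmployeeSecurity'[SalesTerritoryId], 'DimSalesTerritory'[SalesTerritoryId]
      )
```

LOOKUPVALUE finds the SalesTerritoryId rows in EmployeeSecurity whose LoginId matches the name returned by USERNAME(), so each member of the role sees only the DimSalesTerritory rows, and through the relationships the related sales rows, for their own territory.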
## See Also [USERNAME Function (DAX)](https://msdn.microsoft.com/library/hh230954.aspx) diff --git a/articles/analysis-services/tutorials/aas-supplemental-lesson-ragged-hierarchies.md b/articles/analysis-services/tutorials/aas-supplemental-lesson-ragged-hierarchies.md index 5123043ab895d..928987012177c 100644 --- a/articles/analysis-services/tutorials/aas-supplemental-lesson-ragged-hierarchies.md +++ b/articles/analysis-services/tutorials/aas-supplemental-lesson-ragged-hierarchies.md @@ -11,7 +11,7 @@ tags: '' ms.assetid: ms.service: analysis-services ms.devlang: NA -ms.topic: article +ms.topic: get-started-article ms.tgt_pltfrm: NA ms.workload: na ms.date: 05/26/2017 diff --git a/articles/app-service-web/app-service-web-tutorial-java-mysql.md b/articles/app-service-web/app-service-web-tutorial-java-mysql.md index 1db996176c2f3..b8355d1fbca2f 100644 --- a/articles/app-service-web/app-service-web-tutorial-java-mysql.md +++ b/articles/app-service-web/app-service-web-tutorial-java-mysql.md @@ -15,49 +15,56 @@ ms.topic: article ms.date: 05/22/2017 ms.author: bbenz --- + # Build a Java and MySQL web app in Azure -This tutorial shows you how to create a Java web app in Azure that connects to a MySQL database. -The first step is to clone an application to your local machine, and have it work with a local MySQL instance. -The next step is to set up Azure services for the Java app and MySQL, then deploy the application to an Azure appservice. -When you are finished, you will have a to-do list application running on Azure and connecting to the Azure MySQL database service. + +This tutorial shows you how to create a Java web app in Azure and connect it to a MySQL database. 
+When you are finished, you will have a [Spring Boot](https://projects.spring.io/spring-boot/) application storing data in [Azure Database for MySQL](https://docs.microsoft.com/azure/mysql/overview) running on [Azure App Service Web Apps](https://docs.microsoft.com/azure/app-service-web/app-service-web-overview). ![Java app running in Azure appservice](./media/app-service-web-tutorial-java-mysql/appservice-web-app.png) -## Before you begin -Before running this sample, install the following prerequisites locally: +In this tutorial, you learn how to: + +> [!div class="checklist"] +> * Create a MySQL database in Azure +> * Connect a sample app to the database +> * Deploy the app to Azure +> * Update and redeploy the app +> * Stream diagnostic logs from Azure +> * Monitor the app in the Azure portal + -1. [Download and install git](https://git-scm.com/) -1. [Download and install Java 7 or above](http://Java.net/downloads.Java) -1. [Download and install Maven](https://maven.apache.org/download.cgi) +## Prerequisites + +1. [Download and install Git](https://git-scm.com/) +1. [Download and install the Java 7 JDK or above](http://www.oracle.com/technetwork/java/javase/downloads/index.html) 1. [Download, install, and start MySQL](https://dev.mysql.com/doc/refman/5.7/en/installing.html) -1. [Download and install the Azure CLI 2.0](https://docs.microsoft.com/cli/azure/install-azure-cli) +1. [Install the Azure CLI 2.0](https://docs.microsoft.com/cli/azure/install-azure-cli) [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] -## Prepare local MySQL database +## Prepare local MySQL -In this step, you create a database in a local MySQL server for your use in this tutorial. +In this step, you create a database in a local MySQL server for use in testing the app locally on your machine. 
### Connect to MySQL server + Connect to your local MySQL server from the command line: ```bash mysql -u root -p ``` -If your command runs successfully, then your MySQL server is already running. If not, make sure that your local MySQL server is started by following the [MySQL post-installation steps](https://dev.mysql.com/doc/refman/5.7/en/postinstallation.html). - If you're prompted for a password, enter the password for the `root` account. If you don't remember your root account password, see [MySQL: How to Reset the Root Password](https://dev.mysql.com/doc/refman/5.7/en/resetting-permissions.html). +If your command runs successfully, then your MySQL server is already running. If not, make sure that your local MySQL server is started by following the [MySQL post-installation steps](https://dev.mysql.com/doc/refman/5.7/en/postinstallation.html). -### Create a database and table +### Create a database In the `mysql` prompt, create a database for the to-do items. ```sql -CREATE DATABASE todoItemDb; -USE todoItemDb; -CREATE TABLE ITEMS ( id varchar(255), name varchar(255), category varchar(255), complete bool); +CREATE DATABASE tododb; ``` Exit your server connection by typing `quit`. @@ -66,90 +73,77 @@ Exit your server connection by typing `quit`. quit ``` -## Create local Java application -In this step, you clone a GitHub repo, configure the MySQL database connection, and run the app locally. +## Create and run the sample app -### Clone the sample +In this step, you clone a sample Spring Boot app, configure it to use the local MySQL database, and run it on your computer. -From the command prompt, navigate to a working directory. ### Clone the sample -Run the following commands to clone the sample repository. +From the command prompt, navigate to a working directory and clone the sample repository.
```bash -git clone https://github.com/bbenz/azure-mysql-java-todo-app +git clone https://github.com/Azure-Samples/spring-boot-appservice-mysql.git ``` -Next, set up lombok.jar by following the steps in the repo's readme. - - -### Configure MySQL connection +### Configure the app to use the MySQL database -This application uses the Maven Jetty plugin to run the application locally and connect to the MySQL database. -To enable access to the local MySQL instance, Set your local MySQL user ID and password in WebContent/WEB-INF/jetty-env.xml. - -Update the User and Password values with your local MySQL instance's user ID and password: +Update the `spring.datasource.password` value in *spring-boot-mysql-todo/src/main/resources/application.properties* with the same root password used to open the MySQL command prompt: ``` - - - - jdbc/todoItemDb - - - jdbc:mysql://localhost:3306/itemdb - root - - - - - - +spring.datasource.password=mysqlpass ``` -> [!NOTE] -> For information on how Jetty uses the `jetty-env.xml` file, see the [Jetty XML Reference](http://www.eclipse.org/jetty/documentation/9.4.x/jetty-env-xml.html). -### Run the sample +### Build and run the sample -Use a Maven command to run the sample: +Build and run the sample using the Maven wrapper included in the repo: ```bash -mvn package jetty:run +cd spring-boot-mysql-todo +mvnw package spring-boot:run +``` + +Open your browser to http://localhost:8080 to see the sample in action. As you add tasks to the list, use the following SQL commands in the MySQL command prompt to view the data stored in MySQL. + +```SQL +use tododb; +select * from todo_item; ``` -Next, navigate to `http://localhost:8080` in a browser. Add a few tasks in the page. +Stop the application by hitting `Ctrl`+`C` in the command prompt. -To stop the application at any time, type `Ctrl`+`C` at the command prompt.
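For reference, the relevant datasource entries in *application.properties* look roughly like the following. Only `spring.datasource.password` is shown by the tutorial; the other keys follow standard Spring Boot conventions and the URL value is an assumption based on the `tododb` database created above:

```properties
# Local MySQL connection (illustrative values; adjust to your environment)
spring.datasource.url=jdbc:mysql://localhost:3306/tododb
spring.datasource.username=root
spring.datasource.password=mysqlpass
```

Because these are ordinary Spring Boot properties, they can later be overridden without rebuilding the app, for example by environment variables on the server that hosts it.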
+## Create an Azure MySQL database -## Create an Azure Database for MySQL -In this step, you create an [Azure Database for MySQL](../mysql/quickstart-create-mysql-server-database-using-azure-cli.md). Later, you will configure your Java application to connect to this database. +In this step, you create an [Azure Database for MySQL](../mysql/quickstart-create-mysql-server-database-using-azure-cli.md) instance using the [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli). You configure the sample application to use this database later on in the tutorial. -### Log in to Azure Use the Azure CLI 2.0 in a terminal window to create the resources needed to host your Java application in Azure appservice. Log in to your Azure subscription with the [az login](/cli/azure/#login) command and follow the on-screen directions. ```azurecli az login -``` +``` ### Create a resource group -Create a [resource group](../azure-resource-manager/resource-group-overview.md) with the [az group create](/cli/azure/group#create) command. An Azure resource group is a logical container into which Azure resources like web apps, databases, and storage accounts are deployed and managed. + +Create a [resource group](../azure-resource-manager/resource-group-overview.md) with the [az group create](/cli/azure/group#create) command. An Azure resource group is a logical container where related resources like web apps, databases, and storage accounts are deployed and managed. The following example creates a resource group in the North Europe region: ```azurecli -az group create --name myResourceGroup --location "North Europe" -``` - -To available value for `--location`, use the [az appservice list-locations](/cli/azure/appservice#list-locations) command. +az group create --name myResourceGroup --location "North Europe" +``` -### Create the server +To see the possible values you can use for `--location`, use the [az appservice list-locations](/cli/azure/appservice#list-locations) command. 
-Create a server in Azure Database for MySQL (Preview) with the [az mysql server create](/cli/azure/mysql/server#create) command. +### Create a MySQL server +Create a server in Azure Database for MySQL (Preview) with the [az mysql server create](/cli/azure/mysql/server#create) command. Substitute your own unique MySQL server name where you see the `` placeholder. This name is part of your MySQL server's hostname, `.mysql.database.azure.com`, so it needs to be globally unique. Also substitute `` and `` with your own values. ```azurecli -az mysql server create --name --resource-group myResourceGroup --location "North Europe" --user --password +az mysql server create --name \ +--resource-group myResourceGroup \ +--location "North Europe" \ +--admin-user \ +--admin-password ``` When the MySQL server is created, the Azure CLI shows information similar to the following example: @@ -162,47 +156,51 @@ When the MySQL server is created, the Azure CLI shows information similar to the "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.DBforMySQL/servers/mysql_server_name", "location": "northeurope", "name": "mysql_server_name", - "resourceGroup": "myResourceGroup", + "resourceGroup": "mysqlJavaResourceGroup", ... + < Output has been truncated for readability > } ``` -### Configure a server firewall +### Configure server firewall Create a firewall rule for your MySQL server to allow client connections by using the [az mysql server firewall-rule create](/cli/azure/mysql/server/firewall-rule#create) command. 
```azurecli -az mysql server firewall-rule create --name allIPs --server mysql_server_name --resource-group myResourceGroup --start-ip-address 0.0.0.0 --end-ip-address 255.255.255.255 +az mysql server firewall-rule create \ +--name allIPs \ +--server <mysql_server_name> \ +--resource-group myResourceGroup \ +--start-ip-address 0.0.0.0 \ +--end-ip-address 255.255.255.255 ``` > [!NOTE] -> Azure Database for MySQL (Preview) does not presently enable connections from Azure services. As IP addresses in Azure are dynamically assigned, it is better to enable all IP addresses for now. As the service is in preview, better methods for securing your database will be enabled soon. +> Azure Database for MySQL (Preview) does not currently enable connections from Azure services automatically. As IP addresses in Azure are dynamically assigned, it is better to enable all IP addresses for now. As the service continues its preview, better methods for securing your database will be enabled. -### Connect to the MySQL server +## Configure the Azure MySQL database -In the terminal window, connect to the MySQL server in Azure. Use the value you specified previously for `` and ``. +In the terminal window on your computer, connect to the MySQL server in Azure. Use the values you specified previously for `<admin_user>` and `<mysql_server_name>`. ```bash mysql -u <admin_user>@<mysql_server_name> -h <mysql_server_name>.mysql.database.azure.com -P 3306 -p ``` -### Create a database and table in the Azure MySQL Service +### Create a database In the `mysql` prompt, create a database for the to-do items. ```sql -CREATE DATABASE todoItemDb; -USE todoItemDb; -CREATE TABLE ITEMS ( id varchar(255), name varchar(255), category varchar(255), complete bool); +CREATE DATABASE tododb; ``` ### Create a user with permissions -Create a database user and give it all privileges in the `todoItemDb` database. Replace the placeholders `<Javaapp_user>` and `<Javaapp_password>` with your own unique app name. +Create a database user and give it all privileges in the `tododb` database.
Replace the placeholders `<Javaapp_user>` and `<Javaapp_password>` with your own user name and password. ```sql CREATE USER '<Javaapp_user>' IDENTIFIED BY '<Javaapp_password>'; -GRANT ALL PRIVILEGES ON todoItemDb.* TO '<Javaapp_user>'; +GRANT ALL PRIVILEGES ON tododb.* TO '<Javaapp_user>'; ``` Exit your server connection by typing `quit`. @@ -211,83 +209,20 @@ Exit your server connection by typing `quit`. quit ``` -### Configure the local MySQL connection with the new Azure Database for MySQL service -In this step, you connect your Java application to the MySQL database you created in Azure Database for MySQL. - -To enable access from the local application to the Azure MySQL service, Set your new MySQL endpoint, user ID, and password in WebContent/WEB-INF/jetty-env.xml: - -``` - - - - jdbc/todoItemDb - - - jdbc:mysql:.mysql.database.azure.com/itemdb - Javaapp_user@mysql_server_name - Azure MySQL Password - - - - -``` - -Save your changes. - -## Test the application - -Use the same maven command as before to run the sample locally again, but this time connecting to the Azure Database for MySQL service: - -```bash -mvn package jetty:run -``` - -Navigate to `http://localhost:8080` in a browser. If the page loads without errors, then your Java application is connecting to the MySQL database in Azure. - -You should not have Add a few tasks in the page. - -To stop the application at any time, type `Ctrl`+`C` in the terminal. - -### Secure sensitive data - -Make sure that the sensitive data in `WebContent/WEB-INF/jetty-env.xml` is not committed into Git. - -To do this, open `.gitignore` from the repository root and add `WebContent/WEB-INF/jetty-env.xml` in a new line. Save your changes. - -Commit your changes to `.gitignore`. - -```bash -git add .gitignore -git commit -m "keep sensitive data in WebContent/WEB-INF/jetty-env.xml out of git" -``` - -## Deploy the Java application to Azure -Next we deploy the Java application to an Azure appservice.
- -### Create an appservice plan - -Create an appservice plan with the [az appservice plan create](/cli/azure/appservice/plan#create) command. - -> [!NOTE] -> An appservice plan represents the collection of physical resources used to host your apps. All applications assigned to an appservice plan share the resources defined by it allowing you to save cost when hosting multiple apps. -> -> appservice plans define: -> -> * Region (North Europe, East US, Southeast Asia) -> * Instance Size (Small, Medium, Large) -> * Scale Count (one, two, or three instances, etc.) -> * SKU (Free, Shared, Basic, Standard, Premium) - +## Deploy the sample to Azure App Service -The following example creates an appservice plan named `myAppServicePlan` using the **FREE** pricing tier: +Create an Azure App Service plan with the **FREE** pricing tier using the [az appservice plan create](/cli/azure/appservice/plan#create) CLI command. The App Service plan defines the physical resources used to host your apps. All applications assigned to an App Service plan share these resources, allowing you to save cost when hosting multiple apps. ```azurecli -az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku FREE +az appservice plan create \ +--name myAppServicePlan \ +--resource-group myResourceGroup \ +--sku FREE ``` -When the appservice plan is created, the Azure CLI shows information similar to the following example: +When the plan is ready, the Azure CLI shows output similar to the following example: -```json +```json { "adminSiteName": null, "appServicePlanName": "myAppServicePlan", @@ -298,23 +233,25 @@ When the appservice plan is created, the Azure CLI shows information similar to "location": "North Europe", "maximumNumberOfWorkers": 1, "name": "myAppServicePlan", - - "targetWorkerSizeId": 0, - "type": "Microsoft.Web/serverfarms", - "workerTierName": null + ...
+ < Output has been truncated for readability > } ``` ### Create an Azure Web app -Now that an appservice plan has been created, create an Azure Web app within the `myAppServicePlan` appservice plan. The web app gives you a hosting space to deploy your code and provides a URL for you to view the deployed application. Use the [az appservice web create](/cli/azure/appservice/web#create) command to create the web app. -In the following command, substitute the `<app_name>` placeholder with your own unique app name. This unique name will be used as the part of the default domain name for the web app, so the name needs to be unique across all apps in Azure. You can later map any custom DNS entry to the web app before you expose it to your users. + Use the [az webapp create](/cli/azure/appservice/web#create) CLI command to create a web app definition in the `myAppServicePlan` App Service plan. The web app definition provides a URL to access your application and configures several options to deploy your code to Azure. ```azurecli -az appservice web create --name <app_name> --resource-group myResourceGroup --plan myAppServicePlan +az webapp create \ +--name <app_name> \ +--resource-group myResourceGroup \ +--plan myAppServicePlan ``` -When the web app has been created, the Azure CLI shows information similar to the following example: +Substitute the `<app_name>` placeholder with your own unique app name. This unique name is part of the default domain name for the web app, so the name needs to be unique across all apps in Azure. You can map a custom domain name entry to the web app before you expose it to your users.
+ +When the web app definition is ready, the Azure CLI shows information similar to the following example: ```json { @@ -326,89 +263,195 @@ When the web app has been created, the Azure CLI shows information similar to th "dailyMemoryTimeQuota": 0, "defaultHostName": "<app_name>.azurewebsites.net", "enabled": true, - "enabledHostNames": [ - ".azurewebsites.net", - ".scm.azurewebsites.net" - ], - "gatewaySiteName": null, - "hostNameSslStates": [ - { - "hostType": "Standard", - "name": ".azurewebsites.net", - "sslState": "Disabled", - "thumbprint": null, - "toUpdate": null, - "virtualIp": null - } - + ... + < Output has been truncated for readability > } ``` -### Set the Java version, the Java Application Server type, and the Application Server version -Set the Java version, Java App Server (container), and container version by using the [az appservice web config update](/cli/azure/appservice/web/config#update) command. +### Configure Java + +Set up the Java runtime configuration that your app needs with the [az webapp config set](/cli/azure/appservice/web/config#update) command. + +The following command configures the web app to run on a recent Java 8 JDK and [Apache Tomcat](http://tomcat.apache.org/) 8.0. + +```azurecli +az webapp config set \ +--name <app_name> \ +--resource-group myResourceGroup \ +--java-version 1.8 \ +--java-container Tomcat \ +--java-container-version 8.0 +``` + +### Configure the app to use the Azure MySQL database + +Before running the sample app, set application settings on the web app to use the Azure MySQL database you created in Azure. These properties are exposed to the web application as environment variables and override the values set in the application.properties inside the packaged web app.
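The override works because of Spring Boot's relaxed binding: an environment variable such as `SPRING_DATASOURCE_URL` is matched to the property `spring.datasource.url`. The sketch below illustrates only the naming convention; it is not Spring's actual implementation:

```java
// Illustrative sketch of Spring Boot's relaxed-binding naming rule:
// an environment variable like SPRING_DATASOURCE_URL maps to the
// property spring.datasource.url (lowercase, '_' replaced by '.').
public class RelaxedBindingSketch {
    static String toPropertyName(String envVar) {
        return envVar.toLowerCase().replace('_', '.');
    }

    public static void main(String[] args) {
        System.out.println(toPropertyName("SPRING_DATASOURCE_URL"));      // spring.datasource.url
        System.out.println(toPropertyName("SPRING_DATASOURCE_USERNAME")); // spring.datasource.username
        System.out.println(toPropertyName("SPRING_DATASOURCE_PASSWORD")); // spring.datasource.password
    }
}
```

This naming convention is why the app settings below use upper-case, underscore-separated names.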
+ +Set application settings using [az webapp config appsettings](https://docs.microsoft.com/cli/azure/appservice/web/config/appsettings) in the CLI: -The following command sets the Java version to 8, the Java App Server to Jetty, and the Jetty version to Newest Jetty 9.3. ```azurecli +az webapp config appsettings set --settings \ +SPRING_DATASOURCE_URL="jdbc:mysql://<mysql_server_name>.mysql.database.azure.com:3306/tododb?verifyServerCertificate=true&useSSL=true&requireSSL=false" \ +--resource-group myResourceGroup \ +--name app_name +``` ```azurecli -az appservice web config update --name <app_name> --resource-group myResourceGroup --java-version 1.8 --java-container Jetty --java-container-version 9.3 +az webapp config appsettings set --settings \ +SPRING_DATASOURCE_USERNAME=Javaapp_user@mysql_server_name \ +--resource-group myResourceGroup \ +--name app_name ``` ```azurecli +az webapp config appsettings set --settings \ +SPRING_DATASOURCE_PASSWORD=Javaapp_password \ +--resource-group myResourceGroup \ +--name app_name +``` -### Get credentials for deployment to the Web App using FTP +### Get FTP deployment credentials You can deploy your application to Azure appservice in various ways including FTP, local Git, GitHub, Visual Studio Team Services, and BitBucket. -For this example, we use Maven to compile a .WAR file and FTP to deploy the .WAR file to the Web App +For this example, use FTP to deploy the .WAR file built previously on your local machine to Azure App Service.
To determine what credentials to pass along in an ftp command to the Web App, use the [az webapp deployment list-publishing-profiles](https://docs.microsoft.com/cli/azure/appservice/web/deployment#list-publishing-profiles) command: ```azurecli +az webapp deployment list-publishing-profiles \ +--name \ +--resource-group myResourceGroup \ +--query "[?publishMethod=='FTP'].{URL:publishUrl, Username:userName,Password:userPWD}" \ +--output json +``` -az appservice web deployment list-publishing-profiles --name --resource-group myResourceGroup --query "[?publishMethod=='FTP'].{URL:publishUrl, Username:userName,Password:userPWD}" --o table - +```JSON +[ + { + "Password": "aBcDeFgHiJkLmNoPqRsTuVwXyZ", + "URL": "ftp://waws-prod-blu-069.ftp.azurewebsites.windows.net/site/wwwroot", + "Username": "app_name\\$app_name" + } +] ``` -### Compile the local application to deply to the Web App -To prepare the local Java application to run on the Azure Web App, recompile all the resources in the Java application into a single .WAR file ready for deployment. Navigate to the directory where the applications pom.xml is located, and type: +### Upload the app using FTP -```bash -mvn clean package -``` -Toward the end of the Maven package process, notice the location of the .WAR file. The output should look like this: +Use your favorite FTP tool to deploy the .WAR file to the */site/wwwroot/webapps* folder on the server address taken from the `URL` field in the previous command. Remove the existing default (ROOT) application directory and replace the existing ROOT.war with the .WAR file built earlier in the tutorial. ```bash +ftp waws-prod-blu-069.ftp.azurewebsites.windows.net +Connected to waws-prod-blu-069.drip.azurewebsites.windows.net. 
+220 Microsoft FTP Service +Name (waws-prod-blu-069.ftp.azurewebsites.windows.net:raisa): app_name\$app_name +331 Password required +Password: +cd /site/wwwroot/webapps +mdelete -i ROOT/* +rmdir ROOT/ +put target/TodoDemo-0.0.1-SNAPSHOT.war ROOT.war +``` -[INFO] Processing war project -[INFO] Copying webapp resources [local-location\GitHub\mysql-java-todo-app\WebContent] -[INFO] Webapp assembled in [1519 msecs] -[INFO] Building war: C:\Users\your\localGitHub\mysql-java-todo-app\target\azure-appservice-mysql-java-sample-0.0.1-SNAPSHOT.war -[INFO] ------------------------------------------------------------------------ -[INFO] BUILD SUCCESS -[INFO] ------------------------------------------------------------------------ +### Test the web app -``` +Browse to `http://.azurewebsites.net/` and add a few tasks to the list. -Note the location of the .War file, and use your favorite FTP method to deploy the .WAR file to the Jetty WebApps folder. In this example, the Jetty WebApps folder is located at /site/wwwroot/webapps in an Azure Web App. +![Java app running in Azure appservice](./media/app-service-web-tutorial-java-mysql/appservice-web-app.png) -### Browse to the Azure web app +**Congratulations!** You're running a data-driven Java app in Azure App Service. -Browse to `http://.azurewebsites.net/` and add a few tasks to the list. +## Update the app and redeploy -![Java app running in Azure appservice](./media/app-service-web-tutorial-java-mysql/appservice-web-app.png) +Update the application to include an additional column in the todo list for what day the item was created. Spring Boot handles updating the database schema for you as the data model changes without altering your existing database records. + +1. 
On your local system, open up *src/main/java/com/example/fabrikam/TodoItem.java* and add the following imports to the class: + ```java + import java.text.SimpleDateFormat; + import java.util.Calendar; + ``` -**Congratulations!** You're running a data-driven Java app in Azure appservice. -To update the app, repeat the maven clean package command and redeploy the app via FTP. +2. Add a `String` property `timeCreated` to *src/main/java/com/example/fabrikam/TodoItem.java*, initializing it with a timestamp at object creation. Add getters/setters for the new `timeCreated` property while you are editing this file. + + ```java + private String name; + private boolean complete; + private String timeCreated; + ... + + public TodoItem(String category, String name) { + this.category = category; + this.name = name; + this.complete = false; + this.timeCreated = new SimpleDateFormat("MMMM dd, yyyy").format(Calendar.getInstance().getTime()); + } + ... + public void setTimeCreated(String timeCreated) { + this.timeCreated = timeCreated; + } + + public String getTimeCreated() { + return timeCreated; + } + ``` + +3. Update *src/main/java/com/example/fabrikam/TodoDemoController.java* with a line in the `updateTodo` method to set the timestamp: + + ```java + item.setComplete(requestItem.isComplete()); + item.setId(requestItem.getId()); + item.setTimeCreated(requestItem.getTimeCreated()); + repository.save(item); + ``` + +4. Add support for the new field in the Thymeleaf template. Update *src/main/resources/templates/index.html* with a new table header for the timestamp, and a new field to display the value of the timestamp in each table data row. + + ```html + Name + Category + Time Created + Complete + ... + item_category + item_time_created + + ``` + +5. Rebuild the application: + + ```bash + mvnw clean package + ``` + +6. FTP the updated .WAR as before, removing the existing *site/wwwroot/webapps/ROOT* directory and *ROOT.war*, then uploading the updated .WAR file as ROOT.war. 
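A side note on the timestamp pattern used for `timeCreated`: `SimpleDateFormat` pattern letters are case-sensitive. Lowercase `yyyy` formats the calendar year, while uppercase `YYYY` formats the week-based year, which jumps to the next year for dates in the last days of December. A standalone sketch (illustrative only, not part of the sample app):

```java
// WeekYearDemo.java -- SimpleDateFormat pattern letters are case-sensitive:
// lowercase "yyyy" is the calendar year, uppercase "YYYY" is the week-based
// year, and the two differ near the end of December. Illustration only.
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Locale;

public class WeekYearDemo {

    static String format(Calendar cal, String pattern) {
        // Pin the locale so the week rules are deterministic.
        return new SimpleDateFormat(pattern, Locale.US).format(cal.getTime());
    }

    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance(Locale.US);
        cal.clear();
        cal.set(2015, Calendar.DECEMBER, 29); // a few days before New Year

        System.out.println(format(cal, "MMMM dd, yyyy")); // December 29, 2015
        System.out.println(format(cal, "MMMM dd, YYYY")); // December 29, 2016 (week year)
    }
}
```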
+ +When you refresh the app, a **Time Created** column is now visible. When you add a new task, the app populates the timestamp automatically. Your existing tasks remain unchanged and work with the app even though the underlying data model has changed. + +![Java app updated with a new column](./media/app-service-web-tutorial-java-mysql/appservice-updates-java.png) + +## Stream diagnostic logs + +While your Java application runs in Azure App Service, you can pipe the console logs directly to your terminal. That way, you see the same diagnostic messages that help you debug application errors. + +To start log streaming, use the [az webapp log tail](/cli/azure/appservice/web/log#tail) command. + +```azurecli +az webapp log tail \ + --name \ + --resource-group myResourceGroup +``` ## Manage your Azure web app -Go to the Azure portal to see the web app you created by signing in to [https://portal.azure.com](https://portal.azure.com). -From the left menu, click **appservice**, then click the name of your Azure web app. +Go to the Azure portal to see the web app you created. + +To do this, sign in to [https://portal.azure.com](https://portal.azure.com). -You should now be in your web app's _blade_ (a portal page that opens horizontally). +From the left menu, click **App Service**, then click the name of your Azure web app. -By default, your web app's blade shows the **Overview** page. This page gives you a view of how your app is doing. Here, you can also perform basic management tasks like browse, stop, start, restart, and delete. The tabs on the left side of the blade show the different configuration pages you can open. +![Portal navigation to Azure web app](./media/app-service-web-tutorial-java-mysql/access-portal.png) -In the **Application Settings** page, +By default, your web app's blade shows the **Overview** page. This page gives you a view of how your app is doing. Here, you can also perform management tasks like stop, start, restart, and delete. 
The tabs on the left side of the blade show the different configuration pages you can open. -![Azure appservice Web App Application Settings](./media/app-service-web-tutorial-java-mysql/appservice-web-app-application-settings.png) +![App Service blade in Azure portal](./media/app-service-web-tutorial-java-mysql/web-app-blade.png) These tabs in the blade show the many great features you can add to your web app. The following list gives you just a few of the possibilities: * Map a custom DNS name @@ -417,7 +460,27 @@ These tabs in the blade show the many great features you can add to your web app * Scale up and out * Add user authentication -## More resources -- [Map an existing custom DNS name to Azure Web Apps](app-service-web-tutorial-custom-domain.md) -- [Bind an existing custom SSL certificate to Azure Web Apps](app-service-web-tutorial-custom-ssl.md) -- [Web apps CLI scripts](app-service-cli-samples.md) +## Clean up resources + +If you don't need these resources for another tutorial (see [Next steps](#next)), you can delete them by running the following command: + +```azurecli +az group delete --name myResourceGroup +``` + + + +## Next steps + +> [!div class="checklist"] +> * Create a MySQL database in Azure +> * Connect a sample Java app to the MySQL database +> * Deploy the app to Azure +> * Update and redeploy the app +> * Stream diagnostic logs from Azure +> * Manage the app in the Azure portal + +Advance to the next tutorial to learn how to map a custom DNS name to the app. 
+ +> [!div class="nextstepaction"] +> [Map an existing custom DNS name to Azure Web Apps](app-service-web-tutorial-custom-domain.md) \ No newline at end of file diff --git a/articles/app-service-web/choose-web-site-cloud-service-vm.md b/articles/app-service-web/choose-web-site-cloud-service-vm.md index 48b611215bec7..82abd25c68c24 100644 --- a/articles/app-service-web/choose-web-site-cloud-service-vm.md +++ b/articles/app-service-web/choose-web-site-cloud-service-vm.md @@ -28,7 +28,7 @@ Service Fabric is a good choice if you’re creating a new app or re-writing an If you have an existing application that would require substantial modifications to run in App Service or Service Fabric, you could choose Virtual Machines in order to simplify migrating to the cloud. However, correctly configuring, securing, and maintaining VMs requires much more time and IT expertise compared to Azure App Service and Service Fabric. If you are considering Azure Virtual Machines, make sure you take into account the ongoing maintenance effort required to patch, update, and manage your VM environment. Azure Virtual Machines is Infrastructure-as-a-Service (IaaS), while App Service and Service Fabric are Platform-as-a-Service (PaaS). ## Feature Comparison -The following table compares the capabilities of App Service, Cloud Services, Virtual Machines, and Service Fabric to help you make the best choice. For current information about the SLA for each option, see [Azure Service Level Agreements](/support/legal/sla/). +The following table compares the capabilities of App Service, Cloud Services, Virtual Machines, and Service Fabric to help you make the best choice. For current information about the SLA for each option, see [Azure Service Level Agreements](https://azure.microsoft.com/support/legal/sla/). 
| Feature | App Service (web apps) | Cloud Services (web roles) | Virtual Machines | Service Fabric | Notes | | --- | --- | --- | --- | --- | --- | @@ -51,8 +51,8 @@ The following table compares the capabilities of App Service, Cloud Services, Vi | Visual Studio integration |X |X |X |X | | | Remote Debugging |X |X |X | | | | Deploy code with TFS |X |X |X |X | | -| Network isolation with [Azure Virtual Network](/services/virtual-network/) |X |X |X |X |See also [Azure Websites Virtual Network Integration](https://azure.microsoft.com/blog/2014/09/15/azure-websites-virtual-network-integration/) | -| Support for [Azure Traffic Manager](/services/traffic-manager/) |X |X |X |X | | +| Network isolation with [Azure Virtual Network](/azure/virtual-network/) |X |X |X |X |See also [Azure Websites Virtual Network Integration](https://azure.microsoft.com/blog/2014/09/15/azure-websites-virtual-network-integration/) | +| Support for [Azure Traffic Manager](/azure/traffic-manager/) |X |X |X |X | | | Integrated Endpoint Monitoring |X |X |X | | | | Remote desktop access to servers | |X |X |X | | | Install any custom MSI | |X |X |X |Service Fabric allows you to host any executable file as a [guest executable](../service-fabric/service-fabric-deploy-existing-app.md) or you can install any app on the VMs. | @@ -139,7 +139,7 @@ If your open source framework is supported on App Service, the languages and fra If your open source framework is not supported on App Service, you can run it on one of the other Azure web hosting options. With Virtual Machines, you install and configure the software on the machine image, which can be Windows or Linux-based. ### I have a line-of-business application that needs to connect to the corporate network -If you want to create a line-of-business application, your website might require direct access to services or data on the corporate network. 
This is possible on App Service, Service Fabric, and Virtual Machines using the [Azure Virtual Network service](/services/virtual-network/). On App Service you can use the [VNET integration feature](https://azure.microsoft.com/blog/2014/09/15/azure-websites-virtual-network-integration/), which allows your Azure applications to run as if they were on your corporate network. +If you want to create a line-of-business application, your website might require direct access to services or data on the corporate network. This is possible on App Service, Service Fabric, and Virtual Machines using the [Azure Virtual Network service](/azure/virtual-network/). On App Service you can use the [VNET integration feature](https://azure.microsoft.com/blog/2014/09/15/azure-websites-virtual-network-integration/), which allows your Azure applications to run as if they were on your corporate network. ### I want to host a REST API or web service for mobile clients HTTP-based web services enable you to support a wide variety of clients, including mobile clients. Frameworks like ASP.NET Web API integrate with Visual Studio to make it easier to create and consume REST services. These services are exposed from a web endpoint, so it is possible to use any web hosting technique on Azure to support this scenario. However, App Service is a great choice for hosting REST APIs. With App Service, you can: @@ -157,7 +157,7 @@ HTTP-based web services enable you to support a wide variety of clients, includi ## Next Steps For more information about the three web hosting options, see [Introducing Azure](../fundamentals-introduction-to-azure.md). 
-To get started with the option(s) you choose for your application, see the following resources: +To get started with the chosen options for your application, see the following resources: * [Azure App Service](/azure/app-service/) * [Azure Cloud Services](/azure/cloud-services/) @@ -166,22 +166,22 @@ To get started with the option(s) you choose for your application, see the follo -[Azure App Service]: /services/app-service/ -[Cloud Services]: http://go.microsoft.com/fwlink/?LinkId=306052 -[Virtual Machines]: http://go.microsoft.com/fwlink/?LinkID=306053 -[Service Fabric]: /services/service-fabric +[Azure App Service]: /azure/app-service/ +[Cloud Services]: /azure/cloud-services/ +[Virtual Machines]: /azure/virtual-machines/ +[Service Fabric]: /azure/service-fabric/ [ClearDB]: http://www.cleardb.com/ [WebJobs]: http://go.microsoft.com/fwlink/?linkid=390226&clcid=0x409 -[Configuring an SSL certificate for an Azure Website]: http://www.windowsazure.com/develop/net/common-tasks/enable-ssl-web-site/ -[azurestore]: http://www.windowsazure.com/gallery/store/ -[scripting]: http://www.windowsazure.com/documentation/scripts/?services=web-sites -[dotnet]: http://www.windowsazure.com/develop/net/ -[nodejs]: http://www.windowsazure.com/develop/nodejs/ -[PHP]: http://www.windowsazure.com/develop/php/ -[Python]: http://www.windowsazure.com/develop/python/ -[servicebus]: http://www.windowsazure.com/documentation/services/service-bus/ -[sqldatabase]: http://www.windowsazure.com/documentation/services/sql-database/ -[Storage]: http://www.windowsazure.com/documentation/services/storage/ +[Configuring an SSL certificate for an Azure Website]: app-service-web-tutorial-custom-ssl.md +[azurestore]: https://azuremarketplace.microsoft.com/en-us/marketplace/apps +[scripting]: https://azure.microsoft.com/documentation/scripts/?services=web-sites +[dotnet]: https://azure.microsoft.com/develop/net/ +[nodejs]: https://azure.microsoft.com/develop/nodejs/ +[PHP]: 
https://azure.microsoft.com/develop/php/ +[Python]: https://azure.microsoft.com/develop/python/ +[servicebus]: /azure/service-bus/ +[sqldatabase]: /azure/sql-database/ +[Storage]: /azure/storage/ diff --git a/articles/app-service-web/media/app-service-web-tutorial-java-mysql/access-portal.png b/articles/app-service-web/media/app-service-web-tutorial-java-mysql/access-portal.png new file mode 100644 index 0000000000000..2a1247127e202 Binary files /dev/null and b/articles/app-service-web/media/app-service-web-tutorial-java-mysql/access-portal.png differ diff --git a/articles/app-service-web/media/app-service-web-tutorial-java-mysql/appservice-updates-java.png b/articles/app-service-web/media/app-service-web-tutorial-java-mysql/appservice-updates-java.png new file mode 100644 index 0000000000000..33f3550b0ae1e Binary files /dev/null and b/articles/app-service-web/media/app-service-web-tutorial-java-mysql/appservice-updates-java.png differ diff --git a/articles/app-service-web/media/app-service-web-tutorial-java-mysql/appservice-web-app.png b/articles/app-service-web/media/app-service-web-tutorial-java-mysql/appservice-web-app.png index 83846b5d084fd..a8672338fae71 100644 Binary files a/articles/app-service-web/media/app-service-web-tutorial-java-mysql/appservice-web-app.png and b/articles/app-service-web/media/app-service-web-tutorial-java-mysql/appservice-web-app.png differ diff --git a/articles/app-service-web/media/app-service-web-tutorial-java-mysql/web-app-blade.png b/articles/app-service-web/media/app-service-web-tutorial-java-mysql/web-app-blade.png new file mode 100644 index 0000000000000..984a18b9d24c4 Binary files /dev/null and b/articles/app-service-web/media/app-service-web-tutorial-java-mysql/web-app-blade.png differ diff --git a/articles/application-gateway/application-gateway-diagnostics.md b/articles/application-gateway/application-gateway-diagnostics.md index 80948483e31d8..c1bb0a54e67dc 100644 --- 
a/articles/application-gateway/application-gateway-diagnostics.md +++ b/articles/application-gateway/application-gateway-diagnostics.md @@ -18,7 +18,7 @@ ms.date: 01/17/2017 ms.author: amitsriva --- -# Backend health, diagnostics logging and metrics for Application Gateway +# Backend health, diagnostics logging, and metrics for Application Gateway Azure provides the capability to monitor resources with logging and metrics. Application Gateway provides these capabilities with backend health, logging, and metrics. @@ -38,10 +38,10 @@ Application gateway provides the capability to monitor the health of individual ### View backend health through the portal -There is nothing that is needed to be done to view backend health. In an existing application gateway, navigate to **Monitoring** > **Backend health**. Each member in the backend pool is listed on this page (whether it is a NIC, IP or FQDN). Backend pool name, port, backend http settings name and health status are shown. Valid values for health status are "Healthy", "Unhealthy" and "Unknown". +Backend health is provided automatically. In an existing application gateway, navigate to **Monitoring** > **Backend health**. Each member in the backend pool is listed on this page (whether it is a NIC, IP, or FQDN). Backend pool name, port, backend http settings name, and health status are shown. Valid values for health status are "Healthy", "Unhealthy" and "Unknown". > [!WARNING] -> If you see a backend health status as **Unknown**, ensure that the access to backend is not blocked by a Network Security Group (NSG) rule or by a custom DNS in the VNet. +> If you see a backend health status as **Unknown**, ensure that the access to backend is not blocked by a Network Security Group (NSG) rule, User defined route (UDR), or by a custom DNS in the VNet. 
![backend health][10] @@ -161,33 +161,60 @@ This log (formerly known as the "operational log") is generated by Azure by defa This log is only generated if you've enabled it on a per Application Gateway basis as detailed in the preceding steps. The data is stored in the storage account you specified when you enabled the logging. Each access of Application Gateway is logged in JSON format, as seen in the following example: + +|Value |Description | +|---------|---------| +|instanceId | Application Gateway instance that served the request. | +|clientIP | Originating IP for the request. | +|clientPort | Originating port for the request. | +|httpMethod | The HTTP method used by the request. | +|requestUri | URI of the request received. | +|RequestQuery | **Server-Routed** - Backend pool instance that was sent the request
**X-AzureApplicationGateway-LOG-ID** - Correlation ID used for the request, can be used to troubleshoot traffic issues on the backend servers.
**SERVER-STATUS** - The HTTP response code Application Gateway received from the backend. | +|UserAgent | User-agent from the HTTP request header. | +|httpStatus | HTTP status code returned to the client from the Application Gateway. | +|httpVersion | HTTP version of the request. | +|receivedBytes | Size of packet received in bytes. | +| sentBytes|Size of packet sent in bytes.| +|timeTaken|The length of time (in milliseconds) that it takes for a request to be processed and its response to be sent. This is calculated as time interval from when Application Gateway receives the first byte of an HTTP request, to the time when response send operation completes. It is important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. | +|sslEnabled|Whether communication to the backend pools used SSL. Valid values are on or off.| ```json { - "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}", + "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/PEERINGTEST/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}", "operationName": "ApplicationGatewayAccess", - "time": "2016-04-11T04:24:37Z", + "time": "2017-04-26T19:27:38Z", "category": "ApplicationGatewayAccessLog", "properties": { - "instanceId":"ApplicationGatewayRole_IN_0", - "clientIP":"37.186.113.170", - "clientPort":"12345", - "httpMethod":"HEAD", - "requestUri":"/xyz/portal", - "requestQuery":"", - "userAgent":"-", - "httpStatus":"200", - "httpVersion":"HTTP/1.0", - "receivedBytes":"27", - "sentBytes":"202", - "timeTaken":"359", - "sslEnabled":"off" + "instanceId": "ApplicationGatewayRole_IN_0", + "clientIP": "191.96.249.97", + "clientPort": 46886, + "httpMethod": "GET", + "requestUri": "/phpmyadmin/scripts/setup.php", + "requestQuery": 
"X-AzureApplicationGateway-CACHE-HIT=0&SERVER-ROUTED=10.4.0.4&X-AzureApplicationGateway-LOG-ID=874f1f0f-6807-41c9-b7bc-f3cfa74aa0b1&SERVER-STATUS=404", + "userAgent": "-", + "httpStatus": 404, + "httpVersion": "HTTP/1.0", + "receivedBytes": 65, + "sentBytes": 553, + "timeTaken": 205, + "sslEnabled": "off" } } ``` ### Performance log -This log is only generated if you have enabled it on a per Application Gateway basis as detailed in the preceding steps. The data is stored in the storage account you specified when you enabled the logging. The following data is logged: +This log is only generated if you have enabled it on a per Application Gateway basis as detailed in the preceding steps. The data is stored in the storage account you specified when you enabled the logging. The performance log data is generated at 1-minute intervals. The following data is logged: + + +|Value |Description | +|---------|---------| +|instanceId | Application Gateway instance for which performance data is being generated. For a multi-instance application gateway, there is one row per instance. | +|healthyHostCount | Number of healthy hosts in the backend pool. | +|unHealthyHostCount | Number of unhealthy hosts in the backend pool. | +|requestCount | Number of requests served. | +|latency | Latency (in milliseconds) of requests from the instance to the backend serving the requests. | +|failedRequestCount| Number of failed requests.| +|throughput|Average throughput since the last log, measured in bytes per second.| ```json { @@ -215,6 +242,25 @@ This log is only generated if you have enabled it on a per application gateway basis as detailed in the preceding steps. This log also requires that web application firewall is configured on an application gateway. The data is stored in the storage account you specified when you enabled the logging. 
The following data is logged: + +|Value |Description | +|---------|---------| +|instanceId | Application Gateway instance for which firewall data is being generated. For a multi-instance application gateway, there is one row per instance. | +|clientIp | Originating IP for the request. | +|clientPort | Originating port for the request. | +|requestUri | URL of the request received. | +|ruleSetType | Rule set type. Available values: OWASP. | +|ruleSetVersion | Rule set version used. Available values are 2.2.9 or 3.0. | +|ruleId | Rule ID of the triggering event. | +|message | User-friendly message for the triggering event. More details are provided in the details section. | +|action | Action taken on the request. Available values are Blocked or Allowed. | +|site | The site for which the log was generated. Currently only Global is listed since rules are global.| +|details | Details of the triggering event. | +|details.message | Description of the rule. | +|details.data | Specific data found in the request that matched the rule. | +|details.file | The configuration file that contained the rule. | +|details.line | The line number in the configuration file that triggered the event. 
| + ```json { "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}", diff --git a/articles/application-gateway/media/application-gateway-diagnostics/diagnostics2.png b/articles/application-gateway/media/application-gateway-diagnostics/diagnostics2.png index 33ebac92e4a02..5e9c2b41e097e 100644 Binary files a/articles/application-gateway/media/application-gateway-diagnostics/diagnostics2.png and b/articles/application-gateway/media/application-gateway-diagnostics/diagnostics2.png differ diff --git a/articles/application-gateway/media/application-gateway-diagnostics/figure2.png b/articles/application-gateway/media/application-gateway-diagnostics/figure2.png index 6d7d89aafce06..d34664a5b2ce3 100644 Binary files a/articles/application-gateway/media/application-gateway-diagnostics/figure2.png and b/articles/application-gateway/media/application-gateway-diagnostics/figure2.png differ diff --git a/articles/application-gateway/media/application-gateway-diagnostics/figure3.png b/articles/application-gateway/media/application-gateway-diagnostics/figure3.png index a942489c351f3..8d8fceb18e053 100644 Binary files a/articles/application-gateway/media/application-gateway-diagnostics/figure3.png and b/articles/application-gateway/media/application-gateway-diagnostics/figure3.png differ diff --git a/articles/application-gateway/media/application-gateway-diagnostics/figure4.png b/articles/application-gateway/media/application-gateway-diagnostics/figure4.png index 30e807ab7be54..f211bcb711285 100644 Binary files a/articles/application-gateway/media/application-gateway-diagnostics/figure4.png and b/articles/application-gateway/media/application-gateway-diagnostics/figure4.png differ diff --git a/articles/application-gateway/media/application-gateway-diagnostics/figure7.png b/articles/application-gateway/media/application-gateway-diagnostics/figure7.png index 
486af9d9b48a9..98a31eb3eacc3 100644 Binary files a/articles/application-gateway/media/application-gateway-diagnostics/figure7.png and b/articles/application-gateway/media/application-gateway-diagnostics/figure7.png differ diff --git a/articles/application-gateway/media/application-gateway-diagnostics/figure8.png b/articles/application-gateway/media/application-gateway-diagnostics/figure8.png index 0b4da6c20c2bc..a5f6bb382339b 100644 Binary files a/articles/application-gateway/media/application-gateway-diagnostics/figure8.png and b/articles/application-gateway/media/application-gateway-diagnostics/figure8.png differ diff --git a/articles/automation/TOC.md b/articles/automation/TOC.md index f5981a070b320..4a5e60b142075 100644 --- a/articles/automation/TOC.md +++ b/articles/automation/TOC.md @@ -1,4 +1,4 @@ -# Overview +# Overview ## [What is Azure Automation?](automation-intro.md) # Get started ## [Get started with Azure Automation](automation-offering-get-started.md) @@ -11,7 +11,7 @@ ### [Create standalone Automation account](automation-create-standalone-account.md) ### [Create Azure AD User account](automation-create-aduser-account.md) ### [Configure Authentication with AWS](automation-config-aws-account.md) -### [Create Azure Run As account with PowerShell](automation-update-account-powershell.md) +### [Create Automation Run As account](automation-create-runas-account.md) ### [Validate Automation account config](automation-verify-runas-authentication.md) ### [Manage role-based access control](automation-role-based-access-control.md) ### [Manage Automation account](automation-manage-account.md) @@ -56,12 +56,12 @@ ### [Remediate Azure VM alert](automation-azure-vm-alert-integration.md) ### [Start/stop VM with JSON Tags](automation-scenario-start-stop-vm-wjson-tags.md) ### [Remove Resource Group](automation-scenario-remove-resourcegroup.md) -### [Start/stop VMs during off-hours](automation-solution-vm-management.md) ### [Source control integration with GitHub 
Enterprise](automation-scenario-source-control-integration-with-github-ent.md) ### [Source control integration with VSTS](automation-scenario-source-control-integration-with-VSTS.md) ## Solutions ### [Change Tracking](../log-analytics/log-analytics-change-tracking.md) ### [Update Management](../operations-management-suite/oms-solution-update-management.md) +### [Start/stop VMs during off-hours](automation-solution-vm-management.md) ## Monitor ### [Forward Azure Automation job data to Log Analytics](automation-manage-send-joblogs-log-analytics.md) ### [Unlink Azure Automation account from Log Analytics](automation-unlink-from-log-analytics.md) diff --git a/articles/automation/automation-create-runas-account.md b/articles/automation/automation-create-runas-account.md new file mode 100644 index 0000000000000..69ea1e1e42979 --- /dev/null +++ b/articles/automation/automation-create-runas-account.md @@ -0,0 +1,290 @@ +--- +title: Create Azure Automation Run As accounts | Microsoft Docs +description: This article describes how to update your Automation account and create Run As accounts with PowerShell, or from the portal. +services: automation +documentationcenter: '' +author: mgoedtel +manager: carmonm +editor: '' + +ms.assetid: +ms.service: automation +ms.workload: tbd +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: get-started-article +ms.date: 06/01/2017 +ms.author: magoedte +--- + +# Update your Automation account authentication with Run As accounts +You can update your existing Automation account from the portal or use PowerShell if: + +* You create an Automation account but decline to create the Run As account. +* You already use an Automation account to manage Resource Manager resources and you want to update the account to include the Run As account for runbook authentication. 
+* You already use an Automation account to manage classic resources and you want to update it to use the Classic Run As account instead of creating a new account and migrating your runbooks and assets to it. +* You want to create a Run As and a Classic Run As account by using a certificate issued by your enterprise certification authority (CA). + +## Prerequisites + +* The script can be run only on Windows 10 and Windows Server 2016 with Azure Resource Manager modules 3.0.0 and later. It is not supported on earlier versions of Windows. +* Azure PowerShell 1.0 and later. For information about the PowerShell 1.0 release, see [How to install and configure Azure PowerShell](/powershell/azureps-cmdlets-docs). +* An Automation account, which is referenced as the value for the *-AutomationAccountName* and *-ApplicationDisplayName* parameters in the following PowerShell script. + +To get the values for *SubscriptionID*, *ResourceGroup*, and *AutomationAccountName*, which are required parameters for the script, do the following: + +1. In the Azure portal, select your Automation account on the **Automation account** blade, and then select **All settings**. +2. On the **All settings** blade, under **Account Settings**, select **Properties**. +3. Note the values on the **Properties** blade.

![The Automation account "Properties" blade](media/automation-create-runas-account/automation-account-properties.png) + +## Create Run As account from the portal +In this section, perform the following steps to update your Azure Automation account from the Azure portal. You create the Run As and Classic Run As accounts individually, and if you don't need to manage resources in the classic Azure portal, you can just create the Azure Run As account. + +The process creates the following items in your Automation account. + +**For Run As accounts:** + +* Creates an Azure AD application with a self-signed certificate, creates a service principal account for the application in Azure AD, and assigns the Contributor role for the account in your current subscription. You can change this setting to Owner or any other role. For more information, see [Role-based access control in Azure Automation](automation-role-based-access-control.md). +* Creates an Automation certificate asset named *AzureRunAsCertificate* in the specified Automation account. The certificate asset holds the certificate private key that's used by the Azure AD application. +* Creates an Automation connection asset named *AzureRunAsConnection* in the specified Automation account. The connection asset holds the applicationId, tenantId, subscriptionId, and certificate thumbprint. + +**For Classic Run As accounts:** + +* Creates an Automation certificate asset named *AzureClassicRunAsCertificate* in the specified Automation account. The certificate asset holds the certificate private key used by the management certificate. +* Creates an Automation connection asset named *AzureClassicRunAsConnection* in the specified Automation account. The connection asset holds the subscription name, subscriptionId, and certificate asset name. + + +1. Sign in to the Azure portal with an account that is a member of the Subscription Admins role and co-administrator of the subscription. +2. 
From the Automation account blade, select **Run As Accounts** under the section **Account Settings**. +3. Depending on which account you require, select either **Azure Run As Account** or **Azure Classic Run As Account**. After you select either option, the **Add Azure Run As Account** or **Add Azure Classic Run As Account** blade appears. Review the overview information, and then click **Create** to proceed with Run As account creation. +4. While Azure creates the Run As account, you can track the progress under **Notifications** from the menu, and a banner states that the account is being created. This process can take a few minutes to complete. + +## Create Run As account using PowerShell script +This PowerShell script includes support for the following configurations: + +* Create a Run As account by using a self-signed certificate. +* Create a Run As account and a Classic Run As account by using a self-signed certificate. +* Create a Run As account and a Classic Run As account by using an enterprise certificate. +* Create a Run As account and a Classic Run As account by using a self-signed certificate in the Azure Government cloud. + +Depending on the configuration option you select, the script creates the following items. + +**For Run As accounts:** + +* Creates an Azure AD application to be exported with either the self-signed or enterprise certificate public key, creates a service principal account for the application in Azure AD, and assigns the Contributor role for the account in your current subscription. You can change this setting to Owner or any other role. For more information, see [Role-based access control in Azure Automation](automation-role-based-access-control.md). +* Creates an Automation certificate asset named *AzureRunAsCertificate* in the specified Automation account. The certificate asset holds the certificate private key that's used by the Azure AD application.
+* Creates an Automation connection asset named *AzureRunAsConnection* in the specified Automation account. The connection asset holds the applicationId, tenantId, subscriptionId, and certificate thumbprint. + +**For Classic Run As accounts:** + +* Creates an Automation certificate asset named *AzureClassicRunAsCertificate* in the specified Automation account. The certificate asset holds the certificate private key used by the management certificate. +* Creates an Automation connection asset named *AzureClassicRunAsConnection* in the specified Automation account. The connection asset holds the subscription name, subscriptionId, and certificate asset name. + +>[!NOTE] +> If you select either option for creating a Classic Run As account, after the script is executed, upload the public certificate (.cer file name extension) to the management store for the subscription that the Automation account was created in. +> + +1. Save the following script on your computer. In this example, save it with the filename *New-RunAsAccount.ps1*. 
+ + #Requires -RunAsAdministrator + Param ( + [Parameter(Mandatory=$true)] + [String] $ResourceGroup, + + [Parameter(Mandatory=$true)] + [String] $AutomationAccountName, + + [Parameter(Mandatory=$true)] + [String] $ApplicationDisplayName, + + [Parameter(Mandatory=$true)] + [String] $SubscriptionId, + + [Parameter(Mandatory=$true)] + [Boolean] $CreateClassicRunAsAccount, + + [Parameter(Mandatory=$true)] + [String] $SelfSignedCertPlainPassword, + + [Parameter(Mandatory=$false)] + [String] $EnterpriseCertPathForRunAsAccount, + + [Parameter(Mandatory=$false)] + [String] $EnterpriseCertPlainPasswordForRunAsAccount, + + [Parameter(Mandatory=$false)] + [String] $EnterpriseCertPathForClassicRunAsAccount, + + [Parameter(Mandatory=$false)] + [String] $EnterpriseCertPlainPasswordForClassicRunAsAccount, + + [Parameter(Mandatory=$false)] + [ValidateSet("AzureCloud","AzureUSGovernment")] + [string]$EnvironmentName="AzureCloud", + + [Parameter(Mandatory=$false)] + [int] $SelfSignedCertNoOfMonthsUntilExpired = 12 + ) + + function CreateSelfSignedCertificate([string] $keyVaultName, [string] $certificateName, [string] $selfSignedCertPlainPassword, + [string] $certPath, [string] $certPathCer, [string] $selfSignedCertNoOfMonthsUntilExpired ) { + $Cert = New-SelfSignedCertificate -DnsName $certificateName -CertStoreLocation cert:\LocalMachine\My ` + -KeyExportPolicy Exportable -Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" ` + -NotAfter (Get-Date).AddMonths($selfSignedCertNoOfMonthsUntilExpired) + + $CertPassword = ConvertTo-SecureString $selfSignedCertPlainPassword -AsPlainText -Force + Export-PfxCertificate -Cert ("Cert:\localmachine\my\" + $Cert.Thumbprint) -FilePath $certPath -Password $CertPassword -Force | Write-Verbose + Export-Certificate -Cert ("Cert:\localmachine\my\" + $Cert.Thumbprint) -FilePath $certPathCer -Type CERT | Write-Verbose + } + + function CreateServicePrincipal([System.Security.Cryptography.X509Certificates.X509Certificate2] $PfxCert, 
[string] $applicationDisplayName) { + $CurrentDate = Get-Date + $keyValue = [System.Convert]::ToBase64String($PfxCert.GetRawCertData()) + $KeyId = (New-Guid).Guid + + $KeyCredential = New-Object Microsoft.Azure.Commands.Resources.Models.ActiveDirectory.PSADKeyCredential + $KeyCredential.StartDate = $CurrentDate + $KeyCredential.EndDate= [DateTime]$PfxCert.GetExpirationDateString() + $KeyCredential.EndDate = $KeyCredential.EndDate.AddDays(-1) + $KeyCredential.KeyId = $KeyId + $KeyCredential.CertValue = $keyValue + + # Use key credentials and create an Azure AD application + $Application = New-AzureRmADApplication -DisplayName $ApplicationDisplayName -HomePage ("http://" + $applicationDisplayName) -IdentifierUris ("http://" + $KeyId) -KeyCredentials $KeyCredential + $ServicePrincipal = New-AzureRMADServicePrincipal -ApplicationId $Application.ApplicationId + $GetServicePrincipal = Get-AzureRmADServicePrincipal -ObjectId $ServicePrincipal.Id + + # Sleep here for a few seconds to allow the service principal application to become active (ordinarily takes a few seconds) + Sleep -s 15 + $NewRole = New-AzureRMRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName $Application.ApplicationId -ErrorAction SilentlyContinue + $Retries = 0; + While ($NewRole -eq $null -and $Retries -le 6) + { + Sleep -s 10 + New-AzureRMRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName $Application.ApplicationId | Write-Verbose -ErrorAction SilentlyContinue + $NewRole = Get-AzureRMRoleAssignment -ServicePrincipalName $Application.ApplicationId -ErrorAction SilentlyContinue + $Retries++; + } + return $Application.ApplicationId.ToString(); + } + + function CreateAutomationCertificateAsset ([string] $resourceGroup, [string] $automationAccountName, [string] $certifcateAssetName,[string] $certPath, [string] $certPlainPassword, [Boolean] $Exportable) { + $CertPassword = ConvertTo-SecureString $certPlainPassword -AsPlainText -Force + Remove-AzureRmAutomationCertificate 
-ResourceGroupName $resourceGroup -AutomationAccountName $automationAccountName -Name $certifcateAssetName -ErrorAction SilentlyContinue + New-AzureRmAutomationCertificate -ResourceGroupName $resourceGroup -AutomationAccountName $automationAccountName -Path $certPath -Name $certifcateAssetName -Password $CertPassword -Exportable:$Exportable | write-verbose + } + + function CreateAutomationConnectionAsset ([string] $resourceGroup, [string] $automationAccountName, [string] $connectionAssetName, [string] $connectionTypeName, [System.Collections.Hashtable] $connectionFieldValues ) { + Remove-AzureRmAutomationConnection -ResourceGroupName $resourceGroup -AutomationAccountName $automationAccountName -Name $connectionAssetName -Force -ErrorAction SilentlyContinue + New-AzureRmAutomationConnection -ResourceGroupName $ResourceGroup -AutomationAccountName $automationAccountName -Name $connectionAssetName -ConnectionTypeName $connectionTypeName -ConnectionFieldValues $connectionFieldValues + } + + Import-Module AzureRM.Profile + Import-Module AzureRM.Resources + + $AzureRMProfileVersion= (Get-Module AzureRM.Profile).Version + if (!(($AzureRMProfileVersion.Major -ge 3 -and $AzureRMProfileVersion.Minor -ge 0) -or ($AzureRMProfileVersion.Major -gt 3))) + { + Write-Error -Message "Please install the latest Azure PowerShell and retry. 
Relevant doc url : https://docs.microsoft.com/powershell/azureps-cmdlets-docs/ " + return + } + + Login-AzureRmAccount -Environment $EnvironmentName + $Subscription = Select-AzureRmSubscription -SubscriptionId $SubscriptionId + + # Create a Run As account by using a service principal + $CertifcateAssetName = "AzureRunAsCertificate" + $ConnectionAssetName = "AzureRunAsConnection" + $ConnectionTypeName = "AzureServicePrincipal" + + if ($EnterpriseCertPathForRunAsAccount -and $EnterpriseCertPlainPasswordForRunAsAccount) { + $PfxCertPathForRunAsAccount = $EnterpriseCertPathForRunAsAccount + $PfxCertPlainPasswordForRunAsAccount = $EnterpriseCertPlainPasswordForRunAsAccount + } else { + $CertificateName = $AutomationAccountName+$CertifcateAssetName + $PfxCertPathForRunAsAccount = Join-Path $env:TEMP ($CertificateName + ".pfx") + $PfxCertPlainPasswordForRunAsAccount = $SelfSignedCertPlainPassword + $CerCertPathForRunAsAccount = Join-Path $env:TEMP ($CertificateName + ".cer") + CreateSelfSignedCertificate $KeyVaultName $CertificateName $PfxCertPlainPasswordForRunAsAccount $PfxCertPathForRunAsAccount $CerCertPathForRunAsAccount $SelfSignedCertNoOfMonthsUntilExpired + } + + # Create a service principal + $PfxCert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList @($PfxCertPathForRunAsAccount, $PfxCertPlainPasswordForRunAsAccount) + $ApplicationId=CreateServicePrincipal $PfxCert $ApplicationDisplayName + + # Create the Automation certificate asset + CreateAutomationCertificateAsset $ResourceGroup $AutomationAccountName $CertifcateAssetName $PfxCertPathForRunAsAccount $PfxCertPlainPasswordForRunAsAccount $true + + # Populate the ConnectionFieldValues + $SubscriptionInfo = Get-AzureRmSubscription -SubscriptionId $SubscriptionId + $TenantID = $SubscriptionInfo | Select TenantId -First 1 + $Thumbprint = $PfxCert.Thumbprint + $ConnectionFieldValues = @{"ApplicationId" = $ApplicationId; "TenantId" = $TenantID.TenantId; 
"CertificateThumbprint" = $Thumbprint; "SubscriptionId" = $SubscriptionId} + + # Create an Automation connection asset named AzureRunAsConnection in the Automation account. This connection uses the service principal. + CreateAutomationConnectionAsset $ResourceGroup $AutomationAccountName $ConnectionAssetName $ConnectionTypeName $ConnectionFieldValues + + if ($CreateClassicRunAsAccount) { + # Create a Run As account by using a service principal + $ClassicRunAsAccountCertifcateAssetName = "AzureClassicRunAsCertificate" + $ClassicRunAsAccountConnectionAssetName = "AzureClassicRunAsConnection" + $ClassicRunAsAccountConnectionTypeName = "AzureClassicCertificate " + $UploadMessage = "Please upload the .cer format of #CERT# to the Management store by following the steps below." + [Environment]::NewLine + + "Log in to the Microsoft Azure Management portal (https://manage.windowsazure.com) and select Settings -> Management Certificates." + [Environment]::NewLine + + "Then click Upload and upload the .cer format of #CERT#" + + if ($EnterpriseCertPathForClassicRunAsAccount -and $EnterpriseCertPlainPasswordForClassicRunAsAccount ) { + $PfxCertPathForClassicRunAsAccount = $EnterpriseCertPathForClassicRunAsAccount + $PfxCertPlainPasswordForClassicRunAsAccount = $EnterpriseCertPlainPasswordForClassicRunAsAccount + $UploadMessage = $UploadMessage.Replace("#CERT#", $PfxCertPathForClassicRunAsAccount) + } else { + $ClassicRunAsAccountCertificateName = $AutomationAccountName+$ClassicRunAsAccountCertifcateAssetName + $PfxCertPathForClassicRunAsAccount = Join-Path $env:TEMP ($ClassicRunAsAccountCertificateName + ".pfx") + $PfxCertPlainPasswordForClassicRunAsAccount = $SelfSignedCertPlainPassword + $CerCertPathForClassicRunAsAccount = Join-Path $env:TEMP ($ClassicRunAsAccountCertificateName + ".cer") + $UploadMessage = $UploadMessage.Replace("#CERT#", $CerCertPathForClassicRunAsAccount) + CreateSelfSignedCertificate $KeyVaultName $ClassicRunAsAccountCertificateName 
$PfxCertPlainPasswordForClassicRunAsAccount $PfxCertPathForClassicRunAsAccount $CerCertPathForClassicRunAsAccount $SelfSignedCertNoOfMonthsUntilExpired + } + + # Create the Automation certificate asset + CreateAutomationCertificateAsset $ResourceGroup $AutomationAccountName $ClassicRunAsAccountCertifcateAssetName $PfxCertPathForClassicRunAsAccount $PfxCertPlainPasswordForClassicRunAsAccount $false + + # Populate the ConnectionFieldValues + $SubscriptionName = $subscription.Subscription.Name + $ClassicRunAsAccountConnectionFieldValues = @{"SubscriptionName" = $SubscriptionName; "SubscriptionId" = $SubscriptionId; "CertificateAssetName" = $ClassicRunAsAccountCertifcateAssetName} + + # Create an Automation connection asset named AzureClassicRunAsConnection in the Automation account. This connection uses the management certificate. + CreateAutomationConnectionAsset $ResourceGroup $AutomationAccountName $ClassicRunAsAccountConnectionAssetName $ClassicRunAsAccountConnectionTypeName $ClassicRunAsAccountConnectionFieldValues + + Write-Host -ForegroundColor red $UploadMessage + } + +2. On your computer, start **Windows PowerShell** from the **Start** screen with elevated user rights. +3. From the elevated command-line shell, go to the folder that contains the script you created in step 1. +4. Execute the script by using the parameter values for the configuration you require.
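+The examples that follow show which parameters each configuration requires; the values are yours to supply. As a purely hypothetical illustration (every value below is a placeholder, not a real resource), a complete invocation might look like this:
+
+```powershell
+# Hypothetical example values -- substitute your own resource group, account name,
+# subscription ID, application display name, and a strong certificate password
+.\New-RunAsAccount.ps1 -ResourceGroup "ContosoRG" `
+    -AutomationAccountName "ContosoAutomation" `
+    -SubscriptionId "00000000-0000-0000-0000-000000000000" `
+    -ApplicationDisplayName "ContosoRunAsApp" `
+    -SelfSignedCertPlainPassword "StrongP@ssw0rd!" `
+    -CreateClassicRunAsAccount $false
+```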
+ + **Create a Run As account by using a self-signed certificate** + `.\New-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <AutomationAccountName> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <ApplicationDisplayName> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $false` + + **Create a Run As account and a Classic Run As account by using a self-signed certificate** + `.\New-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <AutomationAccountName> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <ApplicationDisplayName> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $true` + + **Create a Run As account and a Classic Run As account by using an enterprise certificate** + `.\New-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <AutomationAccountName> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <ApplicationDisplayName> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $true -EnterpriseCertPathForRunAsAccount <PathToEnterpriseCertPfxFile> -EnterpriseCertPlainPasswordForRunAsAccount <EnterpriseCertPassword> -EnterpriseCertPathForClassicRunAsAccount <PathToEnterpriseClassicCertPfxFile> -EnterpriseCertPlainPasswordForClassicRunAsAccount <EnterpriseClassicCertPassword>` + + **Create a Run As account and a Classic Run As account by using a self-signed certificate in the Azure Government cloud** + `.\New-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <AutomationAccountName> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <ApplicationDisplayName> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $true -EnvironmentName AzureUSGovernment` + + > [!NOTE] + > After the script has executed, you will be prompted to authenticate with Azure. Sign in with an account that is a member of the subscription administrators role and co-administrator of the subscription. + > + > + +After the script has executed successfully, note the following: +* If you created a Classic Run As account with a self-signed public certificate (.cer file), the script creates it and saves it to the temporary files folder (*%USERPROFILE%\AppData\Local\Temp*) of the user profile that you used to run the PowerShell session. +* If you created a Classic Run As account with an enterprise public certificate (.cer file), use this certificate.
Follow the instructions for [uploading a management API certificate to the Azure classic portal](../azure-api-management-certs.md), and then validate the credential configuration with classic deployment resources by using the [sample code to authenticate with Azure Classic Deployment Resources](automation-verify-runas-authentication.md#classic-run-as-authentication). +* If you did *not* create a Classic Run As account, authenticate with Resource Manager resources and validate the credential configuration by using the [sample code for authenticating with Resource Manager resources](automation-verify-runas-authentication.md#automation-run-as-authentication). + +## Next steps +* For more information about Service Principals, refer to [Application Objects and Service Principal Objects](../active-directory/active-directory-application-objects.md). +* For more information about certificates and Azure services, refer to [Certificates overview for Azure Cloud Services](../cloud-services/cloud-services-certs-create.md). \ No newline at end of file diff --git a/articles/automation/automation-offering-get-started.md index 1b54f9975fe79..9d2d50ba0facf 100644 --- a/articles/automation/automation-offering-get-started.md +++ b/articles/automation/automation-offering-get-started.md @@ -1,25 +1,25 @@ ---- +--- title: Get Started with Azure Automation | Microsoft Docs -description: This article provides an overview of Azure Automation service by reviewing the core concepts and implementation details in preparation to onboard the offering from Auzre Marketplace. +description: This article provides an overview of Azure Automation service by reviewing the core concepts and implementation details in preparation to onboard the offering from Azure Marketplace.
services: automation documentationcenter: '' author: mgoedtel manager: carmonm editor: '' -ms.assetid: +ms.assetid: ms.service: automation ms.workload: tbd ms.tgt_pltfrm: na ms.devlang: na ms.topic: get-started-article -ms.date: 05/02/2017 +ms.date: 06/01/2017 ms.author: magoedte --- ## Getting Started with Azure Automation -This getting started guide introduces core concepts related to the deployment of Azure Automation. If you are new to Automation in Azure or have experience with automation workflow software like System Center Orchestrator, this guide helps you get started with concepts and deployment details. +This getting started guide introduces core concepts related to the deployment of Azure Automation. If you are new to Automation in Azure or have experience with automation workflow software like System Center Orchestrator, this guide helps you get started with concepts and deployment details. ## Key concepts @@ -78,14 +78,14 @@ When designating a computer to run hybrid runbook jobs, this computer must have ## Security Azure Automation allows you to automate tasks against resources in Azure, on-premises, and with other cloud providers. In order for a runbook to perform its required actions, it must have permissions to securely access the resources with the minimal rights required within the subscription. -### Automation Account +### Automation Account All the automation tasks you perform against resources using the Azure cmdlets in Azure Automation authenticate to Azure using Azure Active Directory organizational identity credential-based authentication. An Automation account is separate from the account you use to sign in to the portal to configure and use Azure resources. The Automation resources for each Automation account are associated with a single Azure region, but Automation accounts can manage all the resources in your subscription. 
Create Automation accounts in different regions if you have policies that require data and resources to be isolated to a specific region. > [!NOTE] > Automation accounts, and the resources they contain that are created in the Azure portal, cannot be accessed in the Azure classic portal. If you want to manage these accounts or their resources with Windows PowerShell, you must use the Azure Resource Manager modules. -> +> When you create an Automation account in the Azure portal, you automatically create two authentication entities: @@ -97,17 +97,17 @@ Role-based access control is available with Azure Resource Manager to grant perm #### Authentication methods The following table summarizes the different authentication methods for each environment supported by Azure Automation. -| Method | Environment -| --- | --- | +| Method | Environment +| --- | --- | | Azure Run As and Classic Run As account |Azure Resource Manager and Azure classic deployment | | Azure AD User account |Azure Resource Manager and Azure classic deployment | | Windows authentication |Local data center or other cloud provider using the Hybrid Runbook Worker | | AWS credentials |Amazon Web Services | -Under the **How to\Authentication and Security** section, are supporting articles providing overview and implementation steps to configure authentication for those environments, either with an existing or new account you dedicate for that environment. For the Azure Run As and Classic Run As account, the topic [Update Automation Run As account using PowerShell](automation-update-account-powershell.md) describes how to update your existing Automation account with the Run As accounts using PowerShell if it was not originally configured with a Run As or Classic Run As account. 
- +Under the **How to\Authentication and security** section are supporting articles that provide overview and implementation steps to configure authentication for those environments, either with an existing account or a new account that you dedicate for that environment. For the Azure Run As and Classic Run As account, the topic [Update Automation Run As account](automation-create-runas-account.md) describes how to update your existing Automation account with the Run As accounts from the portal or using PowerShell if it was not originally configured with a Run As or Classic Run As account. If you want to create a Run As and a Classic Run As account with a certificate issued by your enterprise certification authority (CA), review this article to learn how to create the accounts using this configuration. + ## Network -For the Hybrid Runbook Worker to connect to and register with Microsoft Operations Management Suite (OMS), it must have access to the port number and the URLs described below. This is in addition to the [ports and URLs required for the Microsoft Monitoring Agent](../log-analytics/log-analytics-windows-agents.md) to connect to OMS. If you use a proxy server for communication between the agent and the OMS service, you need to ensure that the appropriate resources are accessible. If you use a firewall to restrict access to the Internet, you need to configure your firewall to permit access. +For the Hybrid Runbook Worker to connect to and register with Microsoft Operations Management Suite (OMS), it must have access to the port number and the URLs described below. This is in addition to the [ports and URLs required for the Microsoft Monitoring Agent](../log-analytics/log-analytics-windows-agents.md#network) to connect to OMS. If you use a proxy server for communication between the agent and the OMS service, you need to ensure that the appropriate resources are accessible.
If you use a firewall to restrict access to the Internet, you need to configure your firewall to permit access. +The information below lists the ports and URLs that are required for the Hybrid Runbook Worker to communicate with Automation. @@ -131,11 +131,11 @@ If you have an Automation account defined for a specific region and you want to | UK South | uks-jobruntimedata-prod-su1.azure-automation.net | | US Gov Virginia | usge-jobruntimedata-prod-su1.azure-automation.us | -For a list of IP addresses instead of names, download and review the [Azure Datacenter IP address](https://www.microsoft.com/download/details.aspx?id=41653) xml file from the Microsoft Download Center. +For a list of IP addresses instead of names, download and review the [Azure Datacenter IP address](https://www.microsoft.com/download/details.aspx?id=41653) xml file from the Microsoft Download Center. > [!NOTE] -> This file contains the IP address ranges (including Compute, SQL and Storage ranges) used in the Microsoft Azure Datacenters. An updated file is posted weekly which reflects the currently deployed ranges and any upcoming changes to the IP ranges. New ranges appearing in the file will not be used in the datacenters for at least one week. Please download the new xml file every week and perform the necessary changes on your site to correctly identify services running in Azure. Express Route users may note this file used to update the BGP advertisement of Azure space in the first week of each month. -> +> This file contains the IP address ranges (including Compute, SQL and Storage ranges) used in the Microsoft Azure Datacenters. An updated file is posted weekly which reflects the currently deployed ranges and any upcoming changes to the IP ranges. New ranges appearing in the file will not be used in the datacenters for at least one week. Please download the new xml file every week and perform the necessary changes on your site to correctly identify services running in Azure.
Express Route users may note that this file is used to update the BGP advertisement of Azure space in the first week of each month. +> ## Implementation @@ -146,13 +146,13 @@ There are different ways you can create an Automation account in the Azure porta |Method | Description | |-------|-------------| -| Select Automation & Control from the Marketplace | An offering, which creates both an Automation account and OMS workspace linked to one another in the same resource group and region. It also deploys the Change Tracking & Update Management solutions, which are enabled by default. | +| Select Automation & Control from the Marketplace | An offering, which creates both an Automation account and OMS workspace linked to one another in the same resource group and region. Integration with OMS also includes the benefit of using Log Analytics to monitor and analyze runbook job status and job streams over time and utilize advanced features to escalate or investigate issues. The offering also deploys the Change Tracking & Update Management solutions, which are enabled by default. | | Select Automation from the Marketplace | Creates an Automation account in a new or existing resource group that is not linked to an OMS workspace and does not include any available solutions from the Automation & Control offering. This is a basic configuration that introduces you to Automation and can help you learn how to write runbooks, DSC configurations, and use the capabilities of the service. | | Selected Management solutions | If you select a solution – **[Update Management](../operations-management-suite/oms-solution-update-management.md)**, **[Start/Stop VMs during off hours](automation-solution-vm-management.md)**, or **[Change Tracking](../log-analytics/log-analytics-change-tracking.md)** they prompt you to select an existing Automation and OMS workspace, or offer you the option to create both as required for the solution to be deployed in your subscription.
| This topic walks you through creating an Automation account and OMS workspace by onboarding the Automation & Control offering. To create a standalone Automation account for testing or to preview the service, review the following article [Create standalone Automation account](automation-create-standalone-account.md). -### Create Automation account integrated with Log Analytics +### Create Automation account integrated with OMS The recommended method to onboard Automation is by selecting the Automation & Control offering from the Marketplace. This creates both an Automation account and establishes the integration with an OMS workspace, including the option to install the management solutions that are available with the offering. >[!NOTE] @@ -170,16 +170,16 @@ The recommended method to onboard Automation is by selecting the Automation & Co 4. After reading the description for the offering, click **Create**. -5. On the **Automation & Control** settings blade, select **OMS Workspace**. On the **OMS Workspaces** blade, select an OMS workspace linked to the same Azure subscription that the Automation account is in or create a OMS workspace. If you do not have an OMS workspace, select **Create New Workspace** and on the **OMS Workspace** blade perform the following: +5. On the **Automation & Control** settings blade, select **OMS Workspace**. On the **OMS Workspaces** blade, select an OMS workspace linked to the same Azure subscription that the Automation account is in or create an OMS workspace. If you do not have an OMS workspace, select **Create New Workspace** and on the **OMS Workspace** blade perform the following: - Specify a name for the new **OMS Workspace**. - Select a **Subscription** to link to by selecting from the drop-down list if the default selected is not appropriate. - For **Resource Group**, you can create a resource group or select an existing resource group. - Select a **Location**. 
Currently the only locations available are **Australia Southeast**, **East US**, **Southeast Asia**, **West Central US**, and **West Europe**. - Select a **Pricing tier**. The solution is offered in two tiers: free and Per Node (OMS) tier. The free tier has a limit on the amount of data collected daily, retention period, and runbook job runtime minutes. The Per Node (OMS) tier does not have a limit on the amount of data collected daily. - - Select **Automation Account**. If you are creating a new OMS workspace, you are required to also create an Automation account that is associated with the new OMS workspace specified earlier, including your Azure subscription, resource group, and region. You can select **Create an Automation account** and on the **Automation Account** blade, provide the following: + - Select **Automation Account**. If you are creating a new OMS workspace, you are required to also create an Automation account that is associated with the new OMS workspace specified earlier, including your Azure subscription, resource group, and region. You can select **Create an Automation account** and on the **Automation Account** blade, provide the following: - In the **Name** field, enter the name of the Automation account. - All other options are automatically populated based on the OMS workspace selected and these options cannot be modified. An Azure Run As account is the default authentication method for the offering. Once you click **OK**, the configuration options are validated and the Automation account is created. You can track its progress under **Notifications** from the menu. + All other options are automatically populated based on the OMS workspace selected and these options cannot be modified. An Azure Run As account is the default authentication method for the offering. Once you click **OK**, the configuration options are validated and the Automation account is created. You can track its progress under **Notifications** from the menu. 
Otherwise, select an existing Automation Run As account. The account you select cannot already be linked to another OMS workspace, otherwise a notification message is presented in the blade. If it is already linked, you need to select a different Automation Run As account or create one. @@ -189,7 +189,7 @@ The recommended method to onboard Automation is by selecting the Automation & Co 7. On the **Automation & Control** settings blade, confirm you want to install the recommended pre-selected solutions. If you deselect any, you can install them individually later. -8. Click **Create** to proceed with onboarding Automation and an OMS workspace. All settings are validated and then it attempts to deploy the offering in your subscription. This process can take several seconds to complete and you can track its progress under **Notifications** from the menu. +8. Click **Create** to proceed with onboarding Automation and an OMS workspace. All settings are validated and then it attempts to deploy the offering in your subscription. This process can take several seconds to complete and you can track its progress under **Notifications** from the menu. After the offering is onboarded, you can begin creating runbooks, working with the management solutions you enabled, or start working with [Log Analytics](https://docs.microsoft.com/azure/log-analytics) to collect data generated by resources in your cloud or on-premises environments. @@ -197,3 +197,4 @@ After the offering is onboarded, you can begin creating runbooks, working with t * You can confirm your new Automation account can authenticate against Azure resources by reviewing [test Azure Automation Run As account authentication](automation-verify-runas-authentication.md). * To get started with PowerShell runbooks, see [My first PowerShell runbook](automation-first-runbook-textual-powershell.md). * To learn more about Graphical Authoring, see [Graphical authoring in Azure Automation](automation-graphical-authoring-intro.md). 
+ diff --git a/articles/automation/automation-runbook-types.md b/articles/automation/automation-runbook-types.md index efb112baaf10f..9b3ba3473692e 100644 --- a/articles/automation/automation-runbook-types.md +++ b/articles/automation/automation-runbook-types.md @@ -13,7 +13,7 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: na ms.workload: infrastructure-services -ms.date: 11/11/2016 +ms.date: 06/01/2017 ms.author: bwren --- @@ -48,7 +48,7 @@ PowerShell runbooks are based on Windows PowerShell. You directly edit the code ### Advantages * Implement all complex logic with PowerShell code without the additional complexities of PowerShell Workflow. -* Runbook starts faster than Graphical or PowerShell Workflow runbooks since it doesn't need to be compiled before running. +* Runbook starts faster than PowerShell Workflow runbooks since it doesn't need to be compiled before running. ### Limitations * Must be familiar with PowerShell scripting. diff --git a/articles/automation/automation-solution-vm-management.md b/articles/automation/automation-solution-vm-management.md index a974b73a9f881..1c2d3fa20a3a2 100644 --- a/articles/automation/automation-solution-vm-management.md +++ b/articles/automation/automation-solution-vm-management.md @@ -13,7 +13,7 @@ ms.workload: infrastructure-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 05/18/2017 +ms.date: 06/01/2017 ms.author: magoedte --- @@ -52,23 +52,23 @@ Variable | Description| **SendMailO365-MS-Mgmt** Runbook || SendMailO365-IsSendEmail-MS-Mgmt | Specifies if StartByResourceGroup-MS-Mgmt-VM and StopByResourceGroup-MS-Mgmt-VM runbooks can send email notification upon completion. Select **True** to enable and **False** to disable email alerting. Default value is **False**.| **StartByResourceGroup-MS-Mgmt-VM** Runbook || -StartByResourceGroup-ExcludeList-MS-Mgmt-VM | Enter VM names to be excluded from management operation; separate names by using semi-colon(;). 
Values are case-sensitive and wildcard (asterisk) is supported.| +StartByResourceGroup-ExcludeList-MS-Mgmt-VM | Enter VM names to be excluded from management operation; separate names by using semi-colon(;) with no spaces. Values are case-sensitive and wildcard (asterisk) is supported.| StartByResourceGroup-SendMailO365-EmailBodyPreFix-MS-Mgmt | Text that can be appended to the beginning of the email message body.| StartByResourceGroup-SendMailO365-EmailRunBookAccount-MS-Mgmt | Specifies the name of the Automation Account that contains the Email runbook. **Do not modify this variable.**| StartByResourceGroup-SendMailO365-EmailRunbookName-MS-Mgmt | Specifies the name of the email runbook. This is used by the StartByResourceGroup-MS-Mgmt-VM and StopByResourceGroup-MS-Mgmt-VM runbooks to send email. **Do not modify this variable.**| StartByResourceGroup-SendMailO365-EmailRunbookResourceGroup-MS-Mgmt | Specifies the name of the Resource group that contains the Email runbook. **Do not modify this variable.**| StartByResourceGroup-SendMailO365-EmailSubject-MS-Mgmt | Specifies the text for the subject line of the email.| -StartByResourceGroup-SendMailO365-EmailToAddress-MS-Mgmt | Specifies the recipient(s) of the email. Enter separate names by using semi-colon(;).| -StartByResourceGroup-TargetResourceGroups-MS-Mgmt-VM | Enter VM names to be excluded from management operation; separate names by using semi-colon(;). Values are case-sensitive and wildcard (asterisk) is supported. Default value (asterisk) will include all resource groups in the subscription.| +StartByResourceGroup-SendMailO365-EmailToAddress-MS-Mgmt | Specifies the recipient(s) of the email. Enter separate names by using semi-colon(;) with no spaces.| +StartByResourceGroup-TargetResourceGroups-MS-Mgmt-VM | Enter the resource group names that contain the VMs to be managed by this solution; separate names by using semi-colon(;) with no spaces. Values are case-sensitive and wildcard (asterisk) is supported. Default value (asterisk) will include all resource groups in the subscription.| StartByResourceGroup-TargetSubscriptionID-MS-Mgmt-VM | Specifies the subscription that contains VMs to be managed by this solution. This must be the same subscription where the Automation account of this solution resides.| **StopByResourceGroup-MS-Mgmt-VM** Runbook || -StopByResourceGroup-ExcludeList-MS-Mgmt-VM | Enter VM names to be excluded from management operation; separate names by using semi-colon(;). Values are case-sensitive and wildcard (asterisk) is supported.| +StopByResourceGroup-ExcludeList-MS-Mgmt-VM | Enter VM names to be excluded from management operation; separate names by using semi-colon(;) with no spaces. Values are case-sensitive and wildcard (asterisk) is supported.| StopByResourceGroup-SendMailO365-EmailBodyPreFix-MS-Mgmt | Text that can be appended to the beginning of the email message body.| StopByResourceGroup-SendMailO365-EmailRunBookAccount-MS-Mgmt | Specifies the name of the Automation Account that contains the Email runbook. **Do not modify this variable.**| StopByResourceGroup-SendMailO365-EmailRunbookResourceGroup-MS-Mgmt | Specifies the name of the Resource group that contains the Email runbook. **Do not modify this variable.**| StopByResourceGroup-SendMailO365-EmailSubject-MS-Mgmt | Specifies the text for the subject line of the email.| -StopByResourceGroup-SendMailO365-EmailToAddress-MS-Mgmt | Specifies the recipient(s) of the email.
Enter separate names by using semi-colon(;) with no spaces.| +StopByResourceGroup-TargetResourceGroups-MS-Mgmt-VM | Enter the resource group names that contain the VMs to be managed by this solution; separate names by using semi-colon(;) with no spaces. Values are case-sensitive and wildcard (asterisk) is supported. Default value (asterisk) will include all resource groups in the subscription.| StopByResourceGroup-TargetSubscriptionID-MS-Mgmt-VM | Specifies the subscription that contains VMs to be managed by this solution. This must be the same subscription where the Automation account of this solution resides.|
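The exclude-list contract described in the variables table above (semicolon-separated entries with no spaces, case-sensitive matching, asterisk wildcards) can be sketched in a few lines. This is an illustration only, written in Python with made-up VM names; it is not the solution's actual runbook logic:

```python
from fnmatch import fnmatchcase

def parse_exclude_list(value):
    """Split a semicolon-separated exclude list; entries must not contain spaces."""
    entries = [entry for entry in value.split(";") if entry]
    for entry in entries:
        if " " in entry:
            raise ValueError("exclude-list entries must not contain spaces: %r" % entry)
    return entries

def is_excluded(vm_name, patterns):
    # fnmatchcase keeps the comparison case-sensitive and honors the
    # asterisk wildcard, mirroring the variable's documented behavior.
    return any(fnmatchcase(vm_name, pattern) for pattern in patterns)

patterns = parse_exclude_list("WebFront*;SQL01")
print(is_excluded("WebFront01", patterns))  # True: wildcard match
print(is_excluded("sql01", patterns))       # False: matching is case-sensitive
```

`fnmatchcase` (rather than `fnmatch`) is what preserves the case-sensitive behavior the table calls out.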
diff --git a/articles/automation/automation-update-azure-modules.md b/articles/automation/automation-update-azure-modules.md index 2fba224f4b983..48d82b9379e0b 100644 --- a/articles/automation/automation-update-azure-modules.md +++ b/articles/automation/automation-update-azure-modules.md @@ -13,7 +13,7 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: na ms.workload: infrastructure-services -ms.date: 04/20/2017 +ms.date: 06/01/2017 ms.author: magoedte --- @@ -22,6 +22,8 @@ ms.author: magoedte The most common Azure PowerShell modules are provided by default in each Automation account. The Azure team updates the Azure modules regularly, so in the Automation account we provide a way for you to update the modules in the account when new versions are available from the portal. +Because the product group updates the modules regularly, the included cmdlets can change, and these changes may negatively impact your runbooks; for example, a parameter might be renamed or a cmdlet deprecated entirely. To avoid impacting your runbooks and the processes they automate, it is strongly recommended that you test and validate the update before proceeding. If you do not have a dedicated Automation account intended for this purpose, consider creating one so that you can test many different scenarios and permutations during the development of your runbooks, in addition to iterative changes such as updating the PowerShell modules. After you have validated the results and applied any required changes, coordinate the migration of any modified runbooks and then perform the update in production as described below. + ## Updating Azure Modules 1. In the Modules blade of your Automation account there is an option called **Update Azure Modules**. It is always enabled.

![Update Azure Modules option in Modules blade](media/automation-update-azure-modules/automation-update-azure-modules-option.png) @@ -41,10 +43,13 @@ The most common Azure PowerShell modules are provided by default in each Automat If the modules are already up to date, then the process will complete in a few seconds. When the update process completes you will be notified.

![Update Azure Modules update status](media/automation-update-azure-modules/automation-update-azure-modules-updatestatus.png) -Whenever you create a schedule, any subsequent jobs running on that schedule use the modules in the Automation Account at the time the schedule was created. To start using updated modules with your scheduled runbooks, you will need to unlink and re-link the schedule with that runbook. +> [!NOTE] +> Whenever you create a schedule, any subsequent jobs running on that schedule use the modules in the Automation Account at the time the schedule was created. To start using updated modules with your scheduled runbooks, you will need to unlink and re-link the schedule with that runbook. If you use cmdlets from these Azure PowerShell modules in your runbooks to manage Azure resources, then you will want to perform this update process every month or so to assure that you have the latest modules. ## Next steps -To learn more about Integration Modules and how to create custom modules to further integrate Automation with other systems, services, or solutions, see [Integration Modules](automation-integration-modules.md). \ No newline at end of file +* To learn more about Integration Modules and how to create custom modules to further integrate Automation with other systems, services, or solutions, see [Integration Modules](automation-integration-modules.md). + +* Consider source control integration using [GitHub Enterprise](automation-scenario-source-control-integration-with-github-ent.md) or [Visual Studio Team Services](automation-scenario-source-control-integration-with-vsts.md) to centrally manage and control releases of your Automation runbook and configuration portfolio. 
\ No newline at end of file diff --git a/articles/automation/media/automation-create-runas-account/automation-account-properties.png b/articles/automation/media/automation-create-runas-account/automation-account-properties.png new file mode 100644 index 0000000000000..737ffa86c4be5 Binary files /dev/null and b/articles/automation/media/automation-create-runas-account/automation-account-properties.png differ diff --git a/articles/azure-stack/TOC.yml b/articles/azure-stack/TOC.yml index 0724136fb9357..93b05d718e4c5 100644 --- a/articles/azure-stack/TOC.yml +++ b/articles/azure-stack/TOC.yml @@ -7,6 +7,8 @@ items: - name: Deploy Azure Stack href: azure-stack-deploy-overview.md + - name: Install and configure PowerShell + href: azure-stack-powershell-configure-quickstart.md - name: Tutorials items: - name: Enable Virtual Machines diff --git a/articles/azure-stack/azure-stack-add-default-image.md b/articles/azure-stack/azure-stack-add-default-image.md index 8e477b35e25c3..334190967fa3f 100644 --- a/articles/azure-stack/azure-stack-add-default-image.md +++ b/articles/azure-stack/azure-stack-add-default-image.md @@ -67,7 +67,7 @@ After the download completes, the image it is added to the **Marketplace Managem ```PowerShell $TenantID = Get-DirectoryTenantID ` - -AADTenantName ".onmicrosoft.com" ` + -AADTenantName ".onmicrosoft.com" ` -EnvironmentName AzureStackAdmin ``` b. **Active Directory Federation Services**, use the following cmdlet: diff --git a/articles/azure-stack/azure-stack-add-vm-image.md b/articles/azure-stack/azure-stack-add-vm-image.md index 3d012b9633aad..3333299a2285d 100644 --- a/articles/azure-stack/azure-stack-add-vm-image.md +++ b/articles/azure-stack/azure-stack-add-vm-image.md @@ -52,7 +52,7 @@ If the virtual machine image is available locally on the Azure Stack POC compute ```PowerShell $TenantID = Get-DirectoryTenantID ` - -AADTenantName ".onmicrosoft.com" ` + -AADTenantName ".onmicrosoft.com" ` -EnvironmentName AzureStackAdmin ``` b. 
**Active Directory Federation Services**, use the following cmdlet: diff --git a/articles/azure-stack/azure-stack-connect-cli.md b/articles/azure-stack/azure-stack-connect-cli.md index 4454f0124fcd8..3138e133cff7c 100644 --- a/articles/azure-stack/azure-stack-connect-cli.md +++ b/articles/azure-stack/azure-stack-connect-cli.md @@ -125,7 +125,9 @@ az group create \ If the resource group is created successfully, the previous command outputs the following properties of the newly created resource: ![resource group create output](media/azure-stack-connect-cli/image1.png) - + +There are some known issues when using CLI 2.0 in Azure Stack. To learn about these issues, see the [Known issues in Azure Stack CLI](azure-stack-troubleshooting.md#cli) topic. + ## Next steps diff --git a/articles/azure-stack/azure-stack-powershell-configure-quickstart.md b/articles/azure-stack/azure-stack-powershell-configure-quickstart.md new file mode 100644 index 0000000000000..c1c81f5491020 --- /dev/null +++ b/articles/azure-stack/azure-stack-powershell-configure-quickstart.md @@ -0,0 +1,101 @@ +--- +title: Install and configure PowerShell for Azure Stack quickstart | Microsoft Docs +description: Learn about installing and configuring PowerShell for Azure Stack. +services: azure-stack +documentationcenter: '' +author: SnehaGunda +manager: byronr +editor: '' + +ms.assetid: +ms.service: azure-stack +ms.workload: na +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: article +ms.date: 06/01/2017 +ms.author: sngun + +--- + +# Install and configure PowerShell for Azure Stack quickstart + +This topic is a quick start to install and configure PowerShell for Azure Stack. It combines the steps described in the [Install PowerShell](azure-stack-powershell-install.md), [Download tools](azure-stack-powershell-download.md), and [Configure PowerShell](azure-stack-powershell-configure.md) articles. We have scoped the steps in this topic to the Azure Stack **administrator’s environment only**.
You can also use this article for user environments, but make sure to replace the Azure Resource Manager endpoint value. To learn about configuring PowerShell for the user environment, see the user environment steps in the [Configure PowerShell](azure-stack-powershell-configure.md#configure-the-powershell-environment) topic. + +To install and configure PowerShell for the administrator’s environment, open a PowerShell ISE session as an administrator and run the following script: + +```powershell + +# Set the module repository and the execution policy +Set-PSRepository ` + -Name "PSGallery" ` + -InstallationPolicy Trusted + +Set-ExecutionPolicy Unrestricted ` + -Force + +# Uninstall any existing Azure PowerShell modules. To uninstall, close all the active PowerShell sessions and run the following command: +Get-Module -ListAvailable | ` + Where-Object {$_.Name -like "Azure*"} | ` + Uninstall-Module + +# Install PowerShell for Azure Stack +Install-Module ` + -Name AzureRm.BootStrapper ` + -Force + +Use-AzureRmProfile ` + -Profile 2017-03-09-profile ` + -Force + +Install-Module ` + -Name AzureStack ` + -RequiredVersion 1.2.9 ` + -Force + +Import-Module ` + -Name AzureStack ` + -RequiredVersion 1.2.9 ` + -Force + +# Download Azure Stack tools from GitHub and import the connect module +cd \ + +Invoke-WebRequest ` + https://github.com/Azure/AzureStack-Tools/archive/master.zip ` + -OutFile master.zip + +Expand-Archive master.zip ` + -DestinationPath . ` + -Force + +cd AzureStack-Tools-master + +Import-Module ` + .\Connect\AzureStack.Connect.psm1 + +# Configure the administrator’s PowerShell environment. +Add-AzureStackAzureRmEnvironment ` + -Name "AzureStackAdmin" ` + -ArmEndpoint https://adminmanagement.local.azurestack.external + +$Credential = Get-Credential ` + -Message "Enter your Azure Active Directory service administrator's credentials.
The username is in the format: user1@contoso.onmicrosoft.com" + +$TenantName = ($Credential.UserName.split("@"))[1] + +$TenantID = Get-DirectoryTenantID ` + -AADTenantName $TenantName ` + -EnvironmentName AzureStackAdmin + +# Sign in to the administrative portal. +Login-AzureRmAccount ` + -EnvironmentName "AzureStackAdmin" ` + -TenantId $TenantID ` + -Credential $Credential + +# Register resource providers on all subscriptions +Register-AllAzureRmProvidersOnAllSubscriptions + +``` + diff --git a/articles/azure-stack/azure-stack-powershell-configure.md b/articles/azure-stack/azure-stack-powershell-configure.md index e25bf23dd699d..5e590b463b82c 100644 --- a/articles/azure-stack/azure-stack-powershell-configure.md +++ b/articles/azure-stack/azure-stack-powershell-configure.md @@ -64,12 +64,12 @@ Use the following steps to configure your Azure Stack environment: ```PowerShell # Use this command to get the GUID value in the administrator's environment. $TenantID = Get-DirectoryTenantID ` - -AADTenantName ".onmicrosoft.com" ` + -AADTenantName ".onmicrosoft.com" ` -EnvironmentName AzureStackAdmin # Use this command to get the GUID value in the user's environment. $TenantID = Get-DirectoryTenantID ` - -AADTenantName ".onmicrosoft.com" ` + -AADTenantName ".onmicrosoft.com" ` -EnvironmentName AzureStackUser ``` b. **Active Directory Federation Services**, use one of the following cmdlets: diff --git a/articles/azure-stack/azure-stack-troubleshooting.md b/articles/azure-stack/azure-stack-troubleshooting.md index 9006d1f586a00..8f9f9b170773f 100644 --- a/articles/azure-stack/azure-stack-troubleshooting.md +++ b/articles/azure-stack/azure-stack-troubleshooting.md @@ -103,6 +103,17 @@ When connecting to tenant subscriptions with PowerShell, you will notice that th Get-AzureRMResourceProvider | Register-AzureRmResourceProvider +## CLI + +* The CLI interactive mode, i.e., the `az interactive` command, is not yet supported in Azure Stack.
+ +* To get the list of virtual machine images available in Azure Stack, use the `az vm images list --all` command instead of the `az vm image list` command. Specifying the `--all` option ensures that the response returns only the images that are available in your Azure Stack environment. + +* Virtual machine image aliases that are available in Azure may not be applicable to Azure Stack. When using virtual machine images, you must use the entire URN parameter (Canonical:UbuntuServer:14.04.3-LTS:1.0.0) instead of the image alias. This URN must match the image specifications as derived from the `az vm images list` command. + +* By default, CLI 2.0 uses "Standard_DS1_v2" as the default virtual machine size. However, this size is not yet available in Azure Stack, so you need to specify the `--size` parameter explicitly when creating a virtual machine. You can get the list of virtual machine sizes that are available in Azure Stack by using the `az vm list-sizes --location ` command. + + ## Windows Azure Pack Connector * If you change the password of the azurestackadmin account after you deploy Azure Stack TP3, you can no longer configure multi-cloud mode. Therefore, it won't be possible to connect to the target Windows Azure Pack environment.
* After you set up multi-cloud mode: diff --git a/articles/biztalk-services/TOC.md b/articles/biztalk-services/TOC.md index 42d439db14046..ac8e0b951625a 100644 --- a/articles/biztalk-services/TOC.md +++ b/articles/biztalk-services/TOC.md @@ -1,6 +1,5 @@ # Overview ## [Editions](biztalk-editions-feature-chart.md) -## [About Hybrid Connections](integration-hybrid-connection-overview.md) # Get Started ## [Create BizTalk Services](biztalk-provision-services.md) @@ -12,8 +11,6 @@ ## Configure ### [Throttling](biztalk-throttling-thresholds.md) ### [Service settings and monitoring](biztalk-dashboard-monitor-scale-tabs.md) -## Integrate -### [Hybrid Connections](integration-hybrid-connection-create-manage.md) ## Migrate ### [BizTalk Server EDI solutions to BizTalk Services](biztalk-migrating-to-edi-guide.md) ## Monitor @@ -21,6 +18,9 @@ ## Secure ### [Access control](biztalk-issuer-name-issuer-key.md) ## [Troubleshoot](biztalk-troubleshoot-using-ops-logs.md) +## Hybrid connections +### [Overview](integration-hybrid-connection-overview.md) +### [Create and manage](integration-hybrid-connection-create-manage.md) # Resources ## [Release notes](biztalk-release-notes.md) diff --git a/articles/biztalk-services/integration-hybrid-connection-create-manage.md b/articles/biztalk-services/integration-hybrid-connection-create-manage.md index 42a6843d82f23..36317e645ce5f 100644 --- a/articles/biztalk-services/integration-hybrid-connection-create-manage.md +++ b/articles/biztalk-services/integration-hybrid-connection-create-manage.md @@ -18,6 +18,11 @@ ms.author: ccompy --- # Create and Manage Hybrid Connections + +> [!IMPORTANT] +> BizTalk Hybrid Connections is retired, and replaced by App Service Hybrid Connections. For more information, including how to manage your existing BizTalk Hybrid Connections, see [Azure App Service Hybrid Connections](../app-service/app-service-hybrid-connections.md). + + ## Overview of the Steps 1. 
Create a Hybrid Connection by entering the **host name** or **FQDN** of the on-premises resource in your private network. 2. Link your Azure web apps or Azure mobile apps to the Hybrid Connection. diff --git a/articles/biztalk-services/integration-hybrid-connection-overview.md b/articles/biztalk-services/integration-hybrid-connection-overview.md index 02a17e7ece8c1..42fb2ed79b976 100644 --- a/articles/biztalk-services/integration-hybrid-connection-overview.md +++ b/articles/biztalk-services/integration-hybrid-connection-overview.md @@ -18,6 +18,10 @@ ms.author: ccompy --- # Hybrid Connections overview + +> [!IMPORTANT] +> BizTalk Hybrid Connections is retired, and replaced by App Service Hybrid Connections. For more information, including how to manage your existing BizTalk Hybrid Connections, see [Azure App Service Hybrid Connections](../app-service/app-service-hybrid-connections.md). + Introduction to Hybrid Connections, lists the supported configurations, and lists the required TCP ports. 
## What is a hybrid connection diff --git a/articles/cognitive-services/Bing-Autosuggest/get-suggested-search-terms.md b/articles/cognitive-services/Bing-Autosuggest/get-suggested-search-terms.md index 74285f35d0fc9..bea6c9df65acb 100644 --- a/articles/cognitive-services/Bing-Autosuggest/get-suggested-search-terms.md +++ b/articles/cognitive-services/Bing-Autosuggest/get-suggested-search-terms.md @@ -25,7 +25,7 @@ The following example shows a request that returns the suggested query strings f GET https://api.cognitive.microsoft.com/bing/v5.0/suggestions?q=sail&mkt=en-us HTTP/1.1 Ocp-Apim-Subscription-Key: 123456789ABCDE X-MSEdge-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -38,7 +38,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/suggestions?q=sail&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` diff --git a/articles/cognitive-services/Bing-Autosuggest/quick-start.md b/articles/cognitive-services/Bing-Autosuggest/quick-start.md index 315078bc80966..8800fe23bf3bc 100644 --- a/articles/cognitive-services/Bing-Autosuggest/quick-start.md +++ b/articles/cognitive-services/Bing-Autosuggest/quick-start.md @@ -30,7 +30,9 @@ https://api.cognitive.microsoft.com/bing/v5.0/Suggestions > https://api.cognitive.microsoft.com/bing/v7.0/Suggestions > ``` -The request must use the HTTPS protocol, and all requests must be made from a server (calls may not be made from a client). +The request must use the HTTPS protocol. + +We recommend that all requests originate from a server. 
Distributing the key as part of a client application provides more opportunity for a malicious third-party to access it. Also, making calls from a server provides a single upgrade point for future versions of the API. The request must specify the [q](https://docs.microsoft.com/rest/api/cognitiveservices/bing-autosuggest-api-v5-reference#query) query parameter, which contains the user's partial search term. Although it's optional, the request should also specify the [mkt](https://docs.microsoft.com/rest/api/cognitiveservices/bing-autosuggest-api-v5-reference#mkt) query parameter, which identifies the market where you want the results to come from. For a list of optional query parameters, see [Query Parameters](https://docs.microsoft.com/rest/api/cognitiveservices/bing-autosuggest-api-v5-reference#query-parameters). All query parameter values must be URL encoded. @@ -55,7 +57,7 @@ The following example shows a request that returns the suggested query strings f GET https://api.cognitive.microsoft.com/bing/v5.0/suggestions?q=sail&mkt=en-us HTTP/1.1 Ocp-Apim-Subscription-Key: 123456789ABCDE X-MSEdge-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -67,7 +69,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/suggestions?q=sail&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` diff --git a/articles/cognitive-services/Bing-Image-Search/image-insights.md b/articles/cognitive-services/Bing-Image-Search/image-insights.md index d7a9c83bd6f7c..4687d33c75b3f 100644 --- a/articles/cognitive-services/Bing-Image-Search/image-insights.md +++ 
b/articles/cognitive-services/Bing-Image-Search/image-insights.md @@ -43,7 +43,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/images/search?q=sailing+dinghy Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-MSEdge-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -60,7 +60,7 @@ Host: api.cognitive.microsoft.com > Ocp-Apim-Subscription-Key: 123456789ABCDE > User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -78,7 +78,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/images/search?q=sailing+dinghy Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-MSEdge-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -91,7 +91,7 @@ Host: api.cognitive.microsoft.com > Ocp-Apim-Subscription-Key: 123456789ABCDE > User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -106,7 +106,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/images/search?q=digital+camera 
Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -203,7 +203,7 @@ The following is the response to the previous request. The top-level object is a > Ocp-Apim-Subscription-Key: 123456789ABCDE > User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -314,7 +314,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/images/search?modulesRequested Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -442,7 +442,7 @@ The following shows the response to the previous request. 
Because the image cont > Ocp-Apim-Subscription-Key: 123456789ABCDE > User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -505,7 +505,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/images/search?modulesRequested Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com @@ -556,7 +556,7 @@ The following shows the response to the previous request. > Ocp-Apim-Subscription-Key: 123456789ABCDE > User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -594,7 +594,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/images/search?modulesRequested Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -607,7 +607,7 @@ Host: api.cognitive.microsoft.com > Ocp-Apim-Subscription-Key: 123456789ABCDE > User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; 
IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -668,7 +668,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/images/search?cal=0.5&cat=0.0& Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -681,7 +681,7 @@ Host: api.cognitive.microsoft.com > Ocp-Apim-Subscription-Key: 123456789ABCDE > User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -730,7 +730,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/images/search?q=anne+klein+dre Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -790,7 +790,7 @@ The following shows the response to the previous request. 
The response contains > Ocp-Apim-Subscription-Key: 123456789ABCDE > User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -824,7 +824,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/images/search?modulesRequested Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -873,7 +873,7 @@ The following is the response to the previous request. > Ocp-Apim-Subscription-Key: 123456789ABCDE > User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` diff --git a/articles/cognitive-services/Bing-Image-Search/quick-start.md b/articles/cognitive-services/Bing-Image-Search/quick-start.md index 39e32de1648bf..98bced029aaa6 100644 --- a/articles/cognitive-services/Bing-Image-Search/quick-start.md +++ b/articles/cognitive-services/Bing-Image-Search/quick-start.md @@ -30,8 +30,10 @@ https://api.cognitive.microsoft.com/bing/v5.0/images/search > https://api.cognitive.microsoft.com/bing/v7.0/images/search > ``` -The request must use the HTTPS protocol, and all requests must be made from a server (calls may not be made from a client). - +The request must use the HTTPS protocol. 
+ +We recommend that all requests originate from a server. Distributing the key as part of a client application provides more opportunity for a malicious third party to access it. Also, making calls from a server provides a single upgrade point for future versions of the API. + The request must specify the [q](https://docs.microsoft.com/rest/api/cognitiveservices/bing-images-api-v5-reference#query) query parameter, which contains the user's search term. Although it's optional, the request should also specify the [mkt](https://docs.microsoft.com/rest/api/cognitiveservices/bing-images-api-v5-reference#mkt) query parameter, which identifies the market where you want the results to come from. For a list of optional query parameters such as `freshness` and `size`, see [Query Parameters](https://docs.microsoft.com/rest/api/cognitiveservices/bing-images-api-v5-reference#query-parameters). All query parameter values must be URL encoded. The request must specify the [Ocp-Apim-Subscription-Key](https://docs.microsoft.com/rest/api/cognitiveservices/bing-images-api-v5-reference#subscriptionkey) header.
Although optional, you are encouraged to also specify the following headers: @@ -54,7 +56,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/images/search?q=sailing+dinghi Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -66,7 +68,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/images/search?q=sailing+dinghies&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` diff --git a/articles/cognitive-services/Bing-Image-Search/search-the-web.md b/articles/cognitive-services/Bing-Image-Search/search-the-web.md index 7354591657bc8..b96db7217cec9 100644 --- a/articles/cognitive-services/Bing-Image-Search/search-the-web.md +++ b/articles/cognitive-services/Bing-Image-Search/search-the-web.md @@ -26,17 +26,6 @@ If you're requesting images from Bing, your user experience must provide a searc After the user enters their query term, you need to URL encode the term before setting the [q](https://docs.microsoft.com/rest/api/cognitiveservices/bing-images-api-v5-reference#query) query parameter. For example, if the user entered *sailing dinghies*, you would set `q` to *sailing+dinghies* or *sailing%20dinghies*. -If the query term contains a spelling mistake, the response includes a [QueryContext](https://docs.microsoft.com/rest/api/cognitiveservices/bing-images-api-v5-reference#querycontext) object. The object shows the original spelling and the corrected spelling that Bing used for the search. 
- -``` - "queryContext":{ - "originalQuery":"sialing dingies", - "alteredQuery":"sailing dinghies", - "alterationOverrideQuery":"+sialing dingies" - }, -``` - -You could use `originalQuery` and `alteredQuery` to let the user know the actual query term that Bing used. ## Getting images @@ -47,7 +36,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/images/search?q=sailing+dinghi Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -59,7 +48,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/images/search?q=sailing+dinghies&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -161,7 +150,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/images/search?q=sailing+dinghies+site:contososailing.com&size=small&freshness=week&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -195,7 +184,6 @@ The following example shows the pivot suggestions for *Microsoft Surface*. 
"text" : "Sony Surface", "displayText" : "Sony", "webSearchUrl" : "https:\/\/www.bing.com\/images\/search?q=Sony+Surface&FORM=IRQBPS", - "webSearchUrlPingSuffix" : "DevEx,5318.1", "searchLink" : "https:\/\/api.cognitive.microsoft.com\/api\/v5\/images\/search?q=...", "thumbnail" : { "thumbnailUrl" : "https:\/\/tse3.mm.bing.net\/th?q=Sony+Surface&pid=Ap..." @@ -210,7 +198,6 @@ The following example shows the pivot suggestions for *Microsoft Surface*. "text" : "Microsoft Surface4", "displayText" : "Surface2", "webSearchUrl" : "https:\/\/www.bing.com\/images\/search?q=Microsoft+Surface...", - "webSearchUrlPingSuffix" : "DevEx,5360.1", "searchLink" : "https:\/\/api.cognitive.microsoft.com\/api\/v5\/images\/search?...", "thumbnail" : { "thumbnailUrl" : "https:\/\/tse4.mm.bing.net\/th?q=Microsoft..." @@ -220,7 +207,6 @@ The following example shows the pivot suggestions for *Microsoft Surface*. "text" : "Microsoft Tablet", "displayText" : "Tablet", "webSearchUrl" : "https:\/\/www.bing.com\/images\/search?q=Microsoft+Tablet&FORM=IRQBPS", - "webSearchUrlPingSuffix" : "DevEx,5362.1", "searchLink" : "https:\/\/api.cognitive.microsoft.com\/api\/v5\/images\/search?...", "thumbnail" : { "thumbnailUrl" : "https:\/\/tse3.mm.bing.net\/th?q=Microsoft+Tablet..." 
diff --git a/articles/cognitive-services/Bing-Image-Search/trending-images.md b/articles/cognitive-services/Bing-Image-Search/trending-images.md index 72297f612bccc..8e547a0953d85 100644 --- a/articles/cognitive-services/Bing-Image-Search/trending-images.md +++ b/articles/cognitive-services/Bing-Image-Search/trending-images.md @@ -22,7 +22,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/images/trending?mkt=en-us HTTP Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-MSEdge-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -34,7 +34,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/images/trending?mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -107,7 +107,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/images/search?q=Smith&id=77FDE Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-MSEdge-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -120,7 +120,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/images/search?q=Smith&id=77FDE4A1C6529A23C7CF0EC073FAA64843E828F2&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> 
X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` diff --git a/articles/cognitive-services/Bing-News-Search/quick-start.md b/articles/cognitive-services/Bing-News-Search/quick-start.md index 037739185d12c..994f22d483df1 100644 --- a/articles/cognitive-services/Bing-News-Search/quick-start.md +++ b/articles/cognitive-services/Bing-News-Search/quick-start.md @@ -30,8 +30,10 @@ https://api.cognitive.microsoft.com/bing/v5.0/news/search > https://api.cognitive.microsoft.com/bing/v7.0/news/search > ``` -The request must use the HTTPS protocol, and all requests must be made from a server (calls may not be made from a client). - +The request must use the HTTPS protocol. + +We recommend that all requests originate from a server. Distributing the key as part of a client application provides more opportunity for a malicious third party to access it. Also, making calls from a server provides a single upgrade point for future versions of the API. + The request must specify the [q](https://docs.microsoft.com/rest/api/cognitiveservices/bing-news-api-v5-reference#query) query parameter, which contains the user's search term. Although it's optional, the request should also specify the [mkt](https://docs.microsoft.com/rest/api/cognitiveservices/bing-news-api-v5-reference#mkt) query parameter, which identifies the market where you want the results to come from. For a list of optional query parameters such as `freshness` and `textDecorations`, see [Query Parameters](https://docs.microsoft.com/rest/api/cognitiveservices/bing-news-api-v5-reference#query-parameters). All query parameter values must be URL encoded. The request must specify the [Ocp-Apim-Subscription-Key](https://docs.microsoft.com/rest/api/cognitiveservices/bing-news-api-v5-reference#subscriptionkey) header.
Although optional, you are encouraged to also specify the following headers: @@ -54,7 +56,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/news/search?q=sailing+dinghies Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -66,7 +68,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/news/search?q=sailing+dinghies&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` diff --git a/articles/cognitive-services/Bing-News-Search/search-the-web.md b/articles/cognitive-services/Bing-News-Search/search-the-web.md index d0094b0dca575..444a4ca442794 100644 --- a/articles/cognitive-services/Bing-News-Search/search-the-web.md +++ b/articles/cognitive-services/Bing-News-Search/search-the-web.md @@ -26,17 +26,6 @@ If you're requesting general news from Bing, your user experience must provide a After the user enters their query term, you need to URL encode the term before setting the [q](https://docs.microsoft.com/rest/api/cognitiveservices/bing-news-api-v5-reference#query) query parameter. For example, if the user entered *sailing competitions*, you would set `q` to *sailing+competitions* or *sailing%20competitions*. -If the query term contains a spelling mistake, the search response includes a [QueryContext](https://docs.microsoft.com/rest/api/cognitiveservices/bing-news-api-v5-reference#querycontext) object. 
The object shows the original spelling and the corrected spelling that Bing used for the search. - -``` - "queryContext":{ - "originalQuery":"sialing competitions", - "alteredQuery":"sailing competitions", - "alterationOverrideQuery":"+sialing competitions" - }, -``` - -You could use `originalQuery` and `alteredQuery` to let the user know the actual query term that Bing used. ## General news @@ -47,7 +36,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/news/search?q=sailing+dinghies Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -59,7 +48,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/news/search?q=sailing+dinghies&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -135,7 +124,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/news/search?q=&mkt=en-us HTTP/ Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -147,7 +136,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/news/search?q=&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: 
lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -163,7 +152,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/news?category=sports&mkt=en-us Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -175,7 +164,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/news?category=sports&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -194,7 +183,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/news?mkt=en-us HTTP/1.1 Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-MSEdge-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -206,7 +195,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/news?mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -229,7 +218,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/news/trendingtopics?mkt=en-us 
Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: X-MSAPI-UserState: Host: api.cognitive.microsoft.com @@ -242,7 +231,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/news/trendingtopics?mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` diff --git a/articles/cognitive-services/Bing-Spell-Check/proof-text.md b/articles/cognitive-services/Bing-Spell-Check/proof-text.md index da459c2501d09..ab5de29adc8a9 100644 --- a/articles/cognitive-services/Bing-Spell-Check/proof-text.md +++ b/articles/cognitive-services/Bing-Spell-Check/proof-text.md @@ -47,7 +47,7 @@ Content-Type: application/x-www-form-urlencoded Content-Length: 47 Ocp-Apim-Subscription-Key: 123456789ABCDE X-MSEdge-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com @@ -63,7 +63,7 @@ text=when+its+your+turn+turn,+john,+come+runing > Content-Length: 47 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > diff --git a/articles/cognitive-services/Bing-Spell-Check/quick-start.md b/articles/cognitive-services/Bing-Spell-Check/quick-start.md index f92fad6fd7f84..378bca323fb1e 100644 --- 
a/articles/cognitive-services/Bing-Spell-Check/quick-start.md +++ b/articles/cognitive-services/Bing-Spell-Check/quick-start.md @@ -30,8 +30,10 @@ https://api.cognitive.microsoft.com/bing/v5.0/spellcheck > https://api.cognitive.microsoft.com/bing/v7.0/spellcheck > ``` -The request must use the HTTPS protocol, and all requests must be made from a server (calls may not be made from a client). - +The request must use the HTTPS protocol. + +We recommend that all requests originate from a server. Distributing the key as part of a client application provides more opportunity for a malicious third party to access it. Also, making calls from a server provides a single upgrade point for future versions of the API. + The request must specify the [text](https://docs.microsoft.com/rest/api/cognitiveservices/bing-spell-check-api-v5-reference#text) query parameter, which contains the text string to proof. Although optional, the request should also specify the [mkt](https://docs.microsoft.com/rest/api/cognitiveservices/bing-spell-check-api-v5-reference#mkt) query parameter, which identifies the market where you want the results to come from. For a list of optional query parameters such as `mode`, see [Query Parameters](https://docs.microsoft.com/rest/api/cognitiveservices/bing-spell-check-api-v5-reference#query-parameters). All query parameter values must be URL encoded. The request must specify the [Ocp-Apim-Subscription-Key](https://docs.microsoft.com/rest/api/cognitiveservices/bing-spell-check-api-v5-reference#subscriptionkey) header.
Although optional, you are encouraged to also specify the following headers: @@ -52,7 +54,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/spellcheck?text=when+its+your+ Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -64,7 +66,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/spellcheck?text=when+its+your+turn+turn,+john,+come+runing&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` diff --git a/articles/cognitive-services/Bing-Video-Search/quick-start.md b/articles/cognitive-services/Bing-Video-Search/quick-start.md index 7696aeab4afd7..70f7f8d951dbc 100644 --- a/articles/cognitive-services/Bing-Video-Search/quick-start.md +++ b/articles/cognitive-services/Bing-Video-Search/quick-start.md @@ -30,7 +30,10 @@ https://api.cognitive.microsoft.com/bing/v5.0/videos/search > https://api.cognitive.microsoft.com/bing/v7.0/videos/search > ``` -The request must use the HTTPS protocol, and all requests must be made from a server (calls may not be made from a client). +The request must use the HTTPS protocol. + +We recommend that all requests originate from a server. Distributing the key as part of a client application provides more opportunity for a malicious third party to access it. Also, making calls from a server provides a single upgrade point for future versions of the API.
+ The request must specify the [q](https://docs.microsoft.com/rest/api/cognitiveservices/bing-video-api-v5-reference#query) query parameter, which contains the user's search term. Although it's optional, the request should also specify the [mkt](https://docs.microsoft.com/rest/api/cognitiveservices/bing-video-api-v5-reference#mkt) query parameter, which identifies the market where you want the results to come from. For a list of optional query parameters such as `pricing`, see [Query Parameters](https://docs.microsoft.com/rest/api/cognitiveservices/bing-video-api-v5-reference#query-parameters). All query parameter values must be URL encoded. @@ -55,7 +58,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/videos/search?q=sailing+dinghi Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -67,7 +70,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/videos/search?q=sailing+dinghies&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` diff --git a/articles/cognitive-services/Bing-Video-Search/search-the-web.md b/articles/cognitive-services/Bing-Video-Search/search-the-web.md index 58462f67c90ed..17dace7785e67 100644 --- a/articles/cognitive-services/Bing-Video-Search/search-the-web.md +++ b/articles/cognitive-services/Bing-Video-Search/search-the-web.md @@ -25,17 +25,6 @@ If you're requesting videos from Bing, your user experience must provide a searc After the user enters their query term, 
you need to URL encode the term before setting the [q](https://docs.microsoft.com/rest/api/cognitiveservices/bing-video-api-v5-reference#query) query parameter. For example, if the user entered *sailing dinghies*, you would set `q` to *sailing+dinghies* or *sailing%20dinghies*. -If the query term contains a spelling mistake, the response includes a [QueryContext](https://docs.microsoft.com/rest/api/cognitiveservices/bing-video-api-v5-reference#querycontext) object. The object shows the original spelling and the corrected spelling that Bing used for the search. - -``` - "queryContext":{ - "originalQuery":"sialing dingies", - "alteredQuery":"sailing dinghies", - "alterationOverrideQuery":"+sialing dingies" - }, -``` - -You could use `originalQuery` and `alteredQuery` to let the user know the actual query term that Bing used. ## Getting videos @@ -46,7 +35,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/videos/search?q=sailing+dinghi Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -58,7 +47,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/videos/search?q=sailing+dinghies&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -168,7 +157,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/videos/search?q=sailing+dinghi Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 
822) X-MSEdge-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -180,7 +169,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/videos/search?q=sailing+dinghies+site:contososailing.com&pricing=free&freshness=month&resolution=720p&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` diff --git a/articles/cognitive-services/Bing-Video-Search/trending-videos.md b/articles/cognitive-services/Bing-Video-Search/trending-videos.md index 44de4be326575..1c4c4314d3d81 100644 --- a/articles/cognitive-services/Bing-Video-Search/trending-videos.md +++ b/articles/cognitive-services/Bing-Video-Search/trending-videos.md @@ -22,7 +22,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/videos/trending?mkt=en-us HTTP Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-MSEdge-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -34,7 +34,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/videos/trending?mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` diff --git a/articles/cognitive-services/Bing-Video-Search/video-insights.md 
b/articles/cognitive-services/Bing-Video-Search/video-insights.md index 289cfd71e59dc..25f08623e0a27 100644 --- a/articles/cognitive-services/Bing-Video-Search/video-insights.md +++ b/articles/cognitive-services/Bing-Video-Search/video-insights.md @@ -39,7 +39,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/videos/details?q=sailiing+ding Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-MSEdge-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -53,7 +53,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/videos/details?q=sailiing+dinghies&id=6DB795E11A6E3CBAAD636DB795E11A6E3CBAAD63&modules=All&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` @@ -67,7 +67,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/videos/details?q=sailiing+ding Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -113,7 +113,7 @@ The following is the response to the previous request. 
The top-level object is a > GET https://api.cognitive.microsoft.com/bing/v7.0/videos/details?q=sailiing+dinghies&id=6DB795E11A6E3CBAAD636DB795E11A6E3CBAAD63&modules=RelatedVideos&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` diff --git a/articles/cognitive-services/Bing-Web-Search/bing-web-upgrade-guide-v5-to-v7.md b/articles/cognitive-services/Bing-Web-Search/bing-web-upgrade-guide-v5-to-v7.md index a17884d6ecb7a..d36668d56f880 100644 --- a/articles/cognitive-services/Bing-Web-Search/bing-web-upgrade-guide-v5-to-v7.md +++ b/articles/cognitive-services/Bing-Web-Search/bing-web-upgrade-guide-v5-to-v7.md @@ -71,6 +71,10 @@ Blocked|InvalidRequest.Blocked ## Non-breaking changes +### Headers + +- Added the optional [Pragma](https://docs.microsoft.com/rest/api/cognitiveservices/bing-web-api-v7-reference#pragma) request header. By default, Bing returns cached content, if available. To prevent Bing from returning cached content, set the Pragma header to no-cache (for example, Pragma: no-cache). + ### Query parameters - Added the [answerCount](https://docs.microsoft.com/rest/api/cognitiveservices/bing-web-api-v7-reference#answercount) query parameter. Use this parameter to specify the number of answers that you want the response to include. The answers are chosen based on ranking. For example, if you set this parameter to three (3), the response includes the top three ranked answers. 
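To illustrate the new Pragma header described above, a v7 request that opts out of cached results might look like the following sketch (the endpoint and subscription key are the same placeholders used in the other request examples in these articles):

```
GET https://api.cognitive.microsoft.com/bing/v7.0/search?q=sailing+dinghies&mkt=en-us HTTP/1.1
Ocp-Apim-Subscription-Key: 123456789ABCDE
Pragma: no-cache
Host: api.cognitive.microsoft.com
```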
diff --git a/articles/cognitive-services/Bing-Web-Search/filter-answers.md b/articles/cognitive-services/Bing-Web-Search/filter-answers.md index 4bd43391ad9de..35519fccf0d58 100644 --- a/articles/cognitive-services/Bing-Web-Search/filter-answers.md +++ b/articles/cognitive-services/Bing-Web-Search/filter-answers.md @@ -47,7 +47,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/search?q=sailing+dinghies&resp Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: 47.60357,long:-122.3295,re:100 +X-Search-Location: 47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -118,7 +118,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/search?q=sailing+dinghies&answ Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: 47.60357,long:-122.3295,re:100 +X-Search-Location: 47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -162,7 +162,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/search?q=sailing+dinghies&answ Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: 47.60357,long:-122.3295,re:100 +X-Search-Location: 47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` diff --git a/articles/cognitive-services/Bing-Web-Search/quick-start.md b/articles/cognitive-services/Bing-Web-Search/quick-start.md index 673df049258c8..258dd41224fb3 100644 --- a/articles/cognitive-services/Bing-Web-Search/quick-start.md +++ 
b/articles/cognitive-services/Bing-Web-Search/quick-start.md @@ -31,7 +31,9 @@ https://api.cognitive.microsoft.com/bing/v5.0/search > ``` -The request must use the HTTPS protocol, and all requests must be made from a server (calls may not be made from a client). +The request must use the HTTPS protocol. + +We recommend that all requests originate from a server. Distributing the key as part of a client application provides more opportunity for a malicious third party to access it. Also, making calls from a server provides a single upgrade point for future versions of the API. The request must specify the [q](https://docs.microsoft.com/rest/api/cognitiveservices/bing-web-api-v5-reference#query) query parameter, which contains the user's search term. Although it's optional, the request should also specify the [mkt](https://docs.microsoft.com/rest/api/cognitiveservices/bing-web-api-v5-reference#mkt) query parameter, which identifies the market where you want the results to come from. For a list of optional query parameters such as `responseFilter` and `textDecorations`, see [Query Parameters](https://docs.microsoft.com/rest/api/cognitiveservices/bing-web-api-v5-reference#query-parameters). All query parameter values must be URL encoded.
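As a minimal sketch of assembling such a request URL in a shell script (the endpoint and parameter names come from the reference links above; this sketch encodes only spaces, so a real client should apply full percent-encoding):

```shell
# Build a Bing Web Search request URL with the required q parameter
# and the recommended mkt parameter. Only spaces are encoded here
# (as '+'); other reserved characters would need full percent-encoding.
query="sailing lessons seattle"
encoded=$(printf '%s' "$query" | sed 's/ /+/g')
url="https://api.cognitive.microsoft.com/bing/v5.0/search?q=${encoded}&mkt=en-us"
echo "$url"
# prints https://api.cognitive.microsoft.com/bing/v5.0/search?q=sailing+lessons+seattle&mkt=en-us
```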
@@ -55,7 +57,7 @@ GET https://api.cognitive.microsoft.com/bing/v5.0/search?q=sailing+lessons+seatt Ocp-Apim-Subscription-Key: 123456789ABCDE User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822) X-Search-ClientIP: 999.999.999.999 -X-Search-Location: lat:47.60357,long:-122.3295,re:100 +X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: Host: api.cognitive.microsoft.com ``` @@ -67,7 +69,7 @@ Host: api.cognitive.microsoft.com > GET https://api.cognitive.microsoft.com/bing/v7.0/search?q=sailing+lessons+seattle&mkt=en-us HTTP/1.1 > Ocp-Apim-Subscription-Key: 123456789ABCDE > X-MSEdge-ClientIP: 999.999.999.999 -> X-Search-Location: lat:47.60357,long:-122.3295,re:100 +> X-Search-Location: lat:47.60357;long:-122.3295;re:100 > X-MSEdge-ClientID: > Host: api.cognitive.microsoft.com > ``` diff --git a/articles/cognitive-services/Bing-Web-Search/rank-results.md b/articles/cognitive-services/Bing-Web-Search/rank-results.md index 28c4f52177c85..89ecfe8870915 100644 --- a/articles/cognitive-services/Bing-Web-Search/rank-results.md +++ b/articles/cognitive-services/Bing-Web-Search/rank-results.md @@ -215,4 +215,6 @@ And the sidebar would display the following search results: For information about promoting unranked results, see [Promoting answers that are not ranked](./filter-answers.md#promoting-answers-that-are-not-ranked). -For information about limiting the number of ranked answers in the response, see [Limiting the number of answers in the response](./filter-answers.md#limiting-the-number-of-answers-in-the-response). \ No newline at end of file +For information about limiting the number of ranked answers in the response, see [Limiting the number of answers in the response](./filter-answers.md#limiting-the-number-of-answers-in-the-response). + +For a C# example that uses ranking to display results, see [C# ranking tutorial](./csharp-ranking-tutorial.md). 
\ No newline at end of file diff --git a/articles/container-registry/container-registry-headers.md b/articles/container-registry/container-registry-headers.md index d962375133117..cfeee9143c1d8 100644 --- a/articles/container-registry/container-registry-headers.md +++ b/articles/container-registry/container-registry-headers.md @@ -45,11 +45,11 @@ The key-value pairs we are encouraging ACR partners to use are below: | App Service - Logic Apps | azure/app-service/logic-apps | | Batch | azure/compute/batch | | Cloud Console | azure/cloud-console | -| C-Series | azure/compute/c-series | | Functions | azure/compute/functions | | Internet of Things - Hub | azure/iot/hub | | HDInsight | azure/data/hdinsight | | Jenkins | azure/jenkins | +| Machine Learning | azure/data/machine-learning | | Service Fabric | azure/compute/service-fabric | | VSTS | azure/vsts | diff --git a/articles/container-service/TOC.md b/articles/container-service/TOC.md index d60059585c61d..29aa4f9d7cbf6 100644 --- a/articles/container-service/TOC.md +++ b/articles/container-service/TOC.md @@ -12,6 +12,7 @@ ## [CI/CD with Kubernetes and Jenkins](container-service-kubernetes-jenkins.md) ## [CI/CD with Docker Swarm and VSTS](container-service-docker-swarm-setup-ci-cd.md) ## [CI/CD with Docker Swarm mode and VSTS using ACS Engine](container-service-docker-swarm-mode-setup-ci-cd-acs-engine.md) +## [Use Draft with ACS and ACR](container-service-draft-up.md) # Concepts ## [Secure containers](container-service-security.md) diff --git a/articles/container-service/container-service-draft-up.md b/articles/container-service/container-service-draft-up.md new file mode 100644 index 0000000000000..927b7fee4e031 --- /dev/null +++ b/articles/container-service/container-service-draft-up.md @@ -0,0 +1,261 @@ +--- +title: Use Draft with Azure Container Service and Azure Container Registry | Microsoft Docs +description: Create an ACS Kubernetes cluster and an Azure Container Registry to create your first application in
Azure with Draft. +services: container-service +documentationcenter: '' +author: squillace +manager: gamonroy +editor: '' +tags: draft, helm, acs, azure-container-service +keywords: Docker, Containers, microservices, Kubernetes, Draft, Azure + + +ms.service: container-service +ms.devlang: na +ms.topic: get-started-article +ms.tgt_pltfrm: na +ms.workload: na +ms.date: 05/31/2017 +ms.author: rasquill + + +--- + +# Use Draft with Azure Container Service and Azure Container Registry to build and deploy an application to Kubernetes + +[Draft](https://aka.ms/draft) is a new open-source tool that makes it easy to develop container-based applications and deploy them to Kubernetes clusters without knowing much about Docker and Kubernetes -- or even installing them. Using tools like Draft lets you and your teams focus on building the application with Kubernetes, rather than on the infrastructure. + +You can use Draft with any Docker image registry and any Kubernetes cluster, including locally. This tutorial shows how to use ACS with Kubernetes, ACR, and Azure DNS to create a live CI/CD developer pipeline using Draft. + + +## Create an Azure Container Registry +You can easily [create a new Azure Container Registry](../container-registry/container-registry-get-started-azure-cli.md), but the steps are as follows: + +1. Create an Azure resource group to manage your ACR registry and the Kubernetes cluster in ACS. + ```azurecli + az group create --name draft --location eastus + ``` + +2. Create an ACR image registry using [az acr create](/cli/azure/acr#create). + ```azurecli + az acr create -g draft -n draftacs --sku Basic --admin-enabled true -l eastus + ``` + + +## Create an Azure Container Service with Kubernetes + +Now you're ready to use [az acs create](/cli/azure/acs#create) to create an ACS cluster using Kubernetes as the `--orchestrator-type` value.
+```azurecli +az acs create --resource-group draft --name draft-kube-acs --dns-prefix draft-cluster --orchestrator-type kubernetes +``` + +> [!NOTE] +> Because Kubernetes is not the default orchestrator type, be sure you use the `--orchestrator-type kubernetes` switch. + +The output when successful looks similar to the following. + +```json +waiting for AAD role to propagate.done +{ + "id": "/subscriptions//resourceGroups/draft/providers/Microsoft.Resources/deployments/azurecli14904.93snip09", + "name": "azurecli1496227204.9323909", + "properties": { + "correlationId": "", + "debugSetting": null, + "dependencies": [], + "mode": "Incremental", + "outputs": null, + "parameters": { + "clientSecret": { + "type": "SecureString" + } + }, + "parametersLink": null, + "providers": [ + { + "id": null, + "namespace": "Microsoft.ContainerService", + "registrationState": null, + "resourceTypes": [ + { + "aliases": null, + "apiVersions": null, + "locations": [ + "westus" + ], + "properties": null, + "resourceType": "containerServices" + } + ] + } + ], + "provisioningState": "Succeeded", + "template": null, + "templateLink": null, + "timestamp": "2017-05-31T10:46:29.434095+00:00" + }, + "resourceGroup": "draft" +} +``` + +Now that you have a cluster, you can import the credentials by using the [az acs kubernetes get-credentials](/cli/azure/acs/kubernetes#get-credentials) command. Now you have a local configuration file for your cluster, which is what Helm and Draft need to get their work done. + +## Install and configure draft +The installation instructions for Draft are in the [Draft repository](https://github.com/Azure/draft/blob/master/docs/install.md). They are relatively simple, but do require some configuration, as it depends on [Helm](https://aka.ms/helm) to create and deploy a Helm chart into the Kubernetes cluster. + +1. [Download and install Helm](https://aka.ms/helm#install). +2. 
Use Helm to search for and install `stable/traefik`, an ingress controller to enable inbound requests for your builds. + ```bash + $ helm search traefik + NAME VERSION DESCRIPTION + stable/traefik 1.2.1-a A Traefik based Kubernetes ingress controller w... + + $ helm install stable/traefik --name ingress + ``` + Now set a watch on the `ingress` controller to capture the external IP value when it is deployed. This IP address will be the one [mapped to your deployment domain](#wire-up-deployment-domain) in the next section. + + ```bash + kubectl get svc -w + NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE + ingress-traefik 10.0.248.104 13.64.108.240 80:31046/TCP,443:32556/TCP 1h + kubernetes 10.0.0.1 443/TCP 7h + ``` + + In this case, the external IP for the deployment domain is `13.64.108.240`. Now you can map your domain to that IP. + +## Wire up deployment domain + +Draft creates a release for each Helm chart it creates -- each application you are working on. Each one gets a generated name that is used by Draft as a _subdomain_ on top of the root _deployment domain_ that you control. (In this example, we use `squillace.io` as the deployment domain.) To enable this subdomain behavior, you must create an A record for `'*'` in your DNS entries for your deployment domain, so that each generated subdomain is routed to the Kubernetes cluster's ingress controller. + +Your own domain provider has its own way to assign DNS servers; to [delegate your domain nameservers to Azure DNS](../dns/dns-delegate-domain-azure-dns.md), you take the following steps: + +1. Create a resource group for your zone. + ```azurecli + az group create --name squillace.io --location eastus + { + "id": "/subscriptions//resourceGroups/squillace.io", + "location": "eastus", + "managedBy": null, + "name": "squillace.io", + "properties": { + "provisioningState": "Succeeded" + }, + "tags": null + } + ``` + +2. Create a DNS zone for your domain.
+Use the [az network dns zone create](/cli/azure/network/dns/zone#create) command to obtain the nameservers to delegate DNS control to Azure DNS for your domain. + ```azurecli + az network dns zone create --resource-group squillace.io --name squillace.io + { + "etag": "", + "id": "/subscriptions//resourceGroups/squillace.io/providers/Microsoft.Network/dnszones/squillace.io", + "location": "global", + "maxNumberOfRecordSets": 5000, + "name": "squillace.io", + "nameServers": [ + "ns1-09.azure-dns.com.", + "ns2-09.azure-dns.net.", + "ns3-09.azure-dns.org.", + "ns4-09.azure-dns.info." + ], + "numberOfRecordSets": 2, + "resourceGroup": "squillace.io", + "tags": {}, + "type": "Microsoft.Network/dnszones" + } + ``` +3. Add the DNS servers you are given to the domain provider for your deployment domain, which enables you to use Azure DNS to repoint your domain as you want. +4. Create an A record-set entry for your deployment domain mapping to the `ingress` IP from step 2 of the previous section. + ```azurecli + az network dns record-set a add-record --ipv4-address 13.64.108.240 --record-set-name '*' -g squillace.io -z squillace.io + ``` +The output looks something like: + ```json + { + "arecords": [ + { + "ipv4Address": "13.64.108.240" + } + ], + "etag": "", + "id": "/subscriptions//resourceGroups/squillace.io/providers/Microsoft.Network/dnszones/squillace.io/A/*", + "metadata": null, + "name": "*", + "resourceGroup": "squillace.io", + "ttl": 3600, + "type": "Microsoft.Network/dnszones/A" + } + ``` + +5. Configure Draft to use your registry and create subdomains for each Helm chart it creates. To configure Draft, you need: + - your Azure Container Registry name (in this example, `draftacs`) + - your registry key, or password, from `az acr credential show -n $acrname --output tsv --query "passwords[0].value"`.
+ - the root deployment domain that you have configured to map to the Kubernetes ingress external IP address (here, `13.64.108.240`) + + With these values you can create the base-64 encoded value of the configuration JSON string, `{"username":"","password":"","email":"email@example.com"}`. One way to encode the value is the following (but replace this example's values with your own). + ```bash + acrname="draftacs" + password=$(az acr credential show -n $acrname --output tsv --query "passwords[0].value") + authtoken=$(echo \{\"username\":\"$acrname\",\"password\":\"$password\",\"email\":\"rasquill@microsoft.com\"\} | base64) + ``` + + You can confirm that the JSON string is correct by typing `echo $authtoken | base64 -D` (or `base64 -d` on Linux) to display the unencoded result. + Now initialize Draft with this command and configuration argument for the `--set` option: + ```bash + draft init --set registry.url=$acrname.azurecr.io,registry.org=$acrname,registry.authtoken=$authtoken,basedomain=squillace.io + ``` + > [!NOTE] + > It's easy to forget that the `basedomain` value is the base deployment domain that you control and have configured to point at the ingress external IP. + +Now you're ready to deploy an application. + + +## Build and deploy an application + +In the Draft repo are [six simple example applications](https://github.com/Azure/draft/tree/master/examples). Clone the repo and let's use the [Python example](https://github.com/Azure/draft/tree/master/examples/python). Change into the examples/python directory, and type `draft create` to build the application. It should look like the following example.
+```bash +$ draft up +--> Building Dockerfile +Step 1 : FROM python:onbuild +onbuild: Pulling from library/python +10a267c67f42: Pulling fs layer +fb5937da9414: Pulling fs layer +9021b2326a1e: Pulling fs layer +dbed9b09434e: Pulling fs layer +ea8a37f15161: Pulling fs layer + +``` + +and when successful ends with something similar to the following example. +```bash +ab68189731eb: Pushed +53c0ab0341bee12d01be3d3c192fbd63562af7f1: digest: sha256:bb0450ec37acf67ed461c1512ef21f58a500ff9326ce3ec623ce1e4427df9765 size: 2841 +--> Deploying to Kubernetes +--> Status: DEPLOYED +--> Notes: + + http://gangly-bronco.squillace.io to access your application + +Watching local files for changes... +``` + +Whatever your chart's name is, you can now `curl http://gangly-bronco.squillace.io` to receive the reply, `Hello World!`. + +## Next steps + +Now that you have an ACS Kubernetes cluster, you can investigate using [Azure Container Registry](../container-registry/container-registry-intro.md) to create more and different deployments of this scenario. For example, you can create a draft._basedomain.toplevel_ domain DNS record-set that controls things off of a deeper subdomain for specific ACS deployments. + + + + + + diff --git a/articles/data-lake-analytics/data-lake-analytics-analyze-weblogs.md b/articles/data-lake-analytics/data-lake-analytics-analyze-weblogs.md index 4c247169ff6b5..122b114b94d3d 100644 --- a/articles/data-lake-analytics/data-lake-analytics-analyze-weblogs.md +++ b/articles/data-lake-analytics/data-lake-analytics-analyze-weblogs.md @@ -20,11 +20,6 @@ ms.author: edmaca # Tutorial: Analyze Website logs using Azure Data Lake Analytics Learn how to analyze website logs using Data Lake Analytics, especially on finding out which referrers ran into errors when they tried to visit the website. 
-> [!NOTE] -> If you just want to see the application working, it saves time to go through [Use Azure Data Lake Analytics interactive tutorials](data-lake-analytics-use-interactive-tutorials.md). This tutorial is based on the same scenario and the same code. The purpose of this tutorial is to give developers the experience of creating and running a Data Lake Analytics application from end to end. -> -> - ## Prerequisites: * **Visual Studio 2015, Visual Studio 2013 update 4, or Visual Studio 2012 with Visual C++ Installed**. * **Microsoft Azure SDK for .NET version 2.5 or above**. Install it using the [Web platform installer](http://www.microsoft.com/web/downloads/platform.aspx). @@ -37,7 +32,7 @@ Learn how to analyze website logs using Data Lake Analytics, especially on findi * [Get Started with Azure Data Lake Analytics using Azure Portal](data-lake-analytics-get-started-portal.md). * [Develop U-SQL script using Data Lake tools for Visual Studio](data-lake-analytics-data-lake-tools-get-started.md). -* **A Data Lake Analytics account.** See [Create an Azure Data Lake Analytics account](data-lake-analytics-get-started-portal.md#create-data-lake-analytics-account). +* **A Data Lake Analytics account.** See [Create an Azure Data Lake Analytics account](data-lake-analytics-get-started-portal.md). The Data Lake Tools don't support creating Data Lake Analytics accounts, so you have to create one using the Azure portal, Azure PowerShell, the .NET SDK, or the Azure CLI. * **Upload the sample data to the Data Lake Analytics account.** See [To copy sample data files](data-lake-analytics-get-started-portal.md).
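As one hedged sketch of creating the account outside the tools from the Azure CLI (the names `myrg`, `myadls`, and `myadla` are placeholders, and the `az dls`/`az dla` command groups are assumed to be available in your CLI version):

```azurecli
# Resource group, default Data Lake Store account, then the Analytics account.
az group create --name myrg --location eastus2
az dls account create --account myadls --resource-group myrg
az dla account create --account myadla --resource-group myrg --default-data-lake-store myadls
```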
diff --git a/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-get-started.md b/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-get-started.md index 17a466154d7da..5fe04e6e8a255 100644 --- a/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-get-started.md +++ b/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-get-started.md @@ -33,11 +33,11 @@ U-SQL is a hyper-scalable, highly extensible language for preparing, transformin * **Data Lake Analytics account and sample data** The Data Lake Tools do not support creating Data Lake Analytics accounts. Create an account using the Azure portal, Azure PowerShell, .NET SDK or Azure CLI. -For your convenience, a PowerShell script for creating a Data Lake Analytics service and uploading the source data file can be found in [Appx-A PowerShell sample for preparing the tutorial](data-lake-analytics-data-lake-tools-get-started.md#appx-a-powershell-sample-for-preparing-the-tutorial). +For your convenience, a PowerShell script for creating a Data Lake Analytics service and uploading the source data file can be found in [Appx-A PowerShell sample for preparing the tutorial](data-lake-analytics-data-lake-tools-get-started.md). Optionally, you can go through the following two sections in [Get Started with Azure Data Lake Analytics using Azure portal](data-lake-analytics-get-started-portal.md) to create your account and upload data manually: - 1. [Create an Azure Data Lake Analytics account](data-lake-analytics-get-started-portal.md#create-data-lake-analytics-account). + 1. [Create an Azure Data Lake Analytics account](data-lake-analytics-get-started-portal.md). 2. [Upload SearchLog.tsv to the default Data Lake Storage account](data-lake-analytics-get-started-portal.md). 
## Connect to Azure @@ -218,7 +218,7 @@ To see more development topics: * [Develop U-SQL user defined operators for Data Lake Analytics jobs](data-lake-analytics-u-sql-develop-user-defined-operators.md) ## Appx-A PowerShell sample for preparing the tutorial -The following PowerShell script prepares an Azure Data Lake Analytics account and the source data for you, So you can skip to [Develop U-SQL scripts](data-lake-analytics-data-lake-tools-get-started.md#develop-u-sql-scripts). +The following PowerShell script prepares an Azure Data Lake Analytics account and the source data for you, so you can skip to [Develop U-SQL scripts](data-lake-analytics-data-lake-tools-get-started.md). #region - used for creating Azure service names $nameToken = "" diff --git a/articles/data-lake-analytics/data-lake-analytics-get-started-portal.md b/articles/data-lake-analytics/data-lake-analytics-get-started-portal.md index 529ef9dd3f24d..f9629ffb5e57a 100644 --- a/articles/data-lake-analytics/data-lake-analytics-get-started-portal.md +++ b/articles/data-lake-analytics/data-lake-analytics-get-started-portal.md @@ -27,9 +27,9 @@ information about Data Lake Analytics, see [Azure Data Lake Analytics overview]( Before you begin this tutorial, you must have an **Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). -## Create Data Lake Analytics account +## Create a Data Lake Analytics account -Now, you will create a Data Lake Analytics and a Data Lake Store account simultaneously. This step is simple and only takes about 60 to finish. +Now, you will create a Data Lake Analytics and a Data Lake Store account simultaneously. This step is simple and only takes about 60 seconds to finish. 1. Sign on to the [Azure portal](https://portal.azure.com). 2. Click **New** > **Intelligence + analytics** > **Data Lake Analytics**. @@ -42,38 +42,26 @@ Now, you will create a Data Lake Analytics and a Data Lake Store account simulta 4.
Optionally, select a pricing tier for your Data Lake Analytics account. 5. Click **Create**. -## Create and submit Data Lake Analytics jobs -After you have prepared the source data, you can start developing a U-SQL script. - -**To submit a job** - -1. From the Data Lake analytics account, click **New Job**. -2. Enter **Job Name**, and the following U-SQL script: +## Submit a U-SQL job +1. From the Data Lake Analytics account, click **New Job**. +2. Paste in the following U-SQL script: ``` -@searchlog = - EXTRACT UserId int, - Start DateTime, - Region string, - Query string, - Duration int?, - Urls string, - ClickedUrls string - FROM "/Samples/Data/SearchLog.tsv" - USING Extractors.Tsv(); - -OUTPUT @searchlog - TO "/Output/SearchLog-from-Data-Lake.csv" +@a = + SELECT * FROM + (VALUES + ("Contoso", 1500.0), + ("Woodgrove", 2700.0) + ) AS + D( customer, amount ); +OUTPUT @a + TO "/data.csv" USING Outputters.Csv(); ``` - - -This U-SQL script reads the source data file using **Extractors.Tsv()**, and then creates a csv file using **Outputters.Csv()**. - -1. Click **Submit Job**. -2. Wait until the job status is changed to **Succeeded**. -3. If job failed, see [Monitor and troubleshoot Data Lake Analytics jobs](data-lake-analytics-monitor-and-troubleshoot-jobs-tutorial.md). -4. Click the **Output** tab, and then click `SearchLog-from-Data-Lake.csv`. +3. Click **Submit Job**. +4. Wait until the job status is changed to **Succeeded**. +5. If the job failed, see [Monitor and troubleshoot Data Lake Analytics jobs](data-lake-analytics-monitor-and-troubleshoot-jobs-tutorial.md). +6. Click the **Output** tab, and then click `data.csv`.
## See also diff --git a/articles/data-lake-analytics/data-lake-analytics-get-started-powershell.md b/articles/data-lake-analytics/data-lake-analytics-get-started-powershell.md index 45915024fcbf8..4f4f394257499 100644 --- a/articles/data-lake-analytics/data-lake-analytics-get-started-powershell.md +++ b/articles/data-lake-analytics/data-lake-analytics-get-started-powershell.md @@ -23,48 +23,23 @@ ms.author: edmaca Learn how to use Azure PowerShell to create Azure Data Lake Analytics accounts and then submit and run U-SQL jobs. For more information about Data Lake Analytics, see [Azure Data Lake Analytics overview](data-lake-analytics-overview.md). ## Prerequisites + Before you begin this tutorial, you must have the following information: -* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). +* **An Azure Data Lake Analytics account**. See [Get started with Data Lake Analytics](https://docs.microsoft.com/en-us/azure/data-lake-analytics/data-lake-analytics-get-started-portal). * **A workstation with Azure PowerShell**. See [How to install and configure Azure PowerShell](/powershell/azure/overview). ## Preparing for the tutorial -To create a Data Lake Analytics account, you first need to define: - -* **Azure Resource Group**: A Data Lake Analytics account must be created within an Azure Resource group. -* **Data Lake Analytics account name**: The Data Lake account name must only contain lowercase letters and numbers. -* **Location**: one of the Azure data centers that supports Data Lake Analytics. -* **Default Data Lake Store account**: each Data Lake Analytics account has a default Data Lake Store account. These accounts must be in the same location. 
The PowerShell snippets in this tutorial use these variables to store this information ``` $rg = "" -$adls = "" +$adls = "" $adla = "" $location = "East US 2" ``` -## Create a Data Lake Analytics account - -If you don't already have a Resource Group to use, create one. - -``` -New-AzureRmResourceGroup -Name $rg -Location $location -``` - -Every Data Lake Analytics account requires a default Data Lake Store account that it uses for storing logs. You can reuse an existing account or create a new account. - -``` -New-AdlStore -ResourceGroupName $rg -Name $adls -Location $location -``` - -Once a Resource Group and Data Lake Store account is available, create a Data Lake Analytics account. - -``` -New-AdlAnalyticsAccount -ResourceGroupName $rg -Name $adla -Location $location -DefaultDataLake $adls -``` - ## Get information about a Data Lake Analytics account ``` @@ -134,7 +109,6 @@ Download the output of the U-SQL script. Export-AdlStoreItem -AccountName $adls -Path "/data.csv" -Destination "D:\data.csv" ``` - Upload a file to be used as an input to a U-SQL script. ``` diff --git a/articles/data-lake-analytics/data-lake-analytics-manage-use-powershell.md b/articles/data-lake-analytics/data-lake-analytics-manage-use-powershell.md index 0842174bfec5e..fe423ce81c0c3 100644 --- a/articles/data-lake-analytics/data-lake-analytics-manage-use-powershell.md +++ b/articles/data-lake-analytics/data-lake-analytics-manage-use-powershell.md @@ -22,14 +22,14 @@ Learn how to manage Azure Data Lake Analytics accounts, data sources, users, and jobs using Azure PowerShell. To see management topics using other tools, click the tab selector above. -**Prerequisites** +## Prerequisites To create a Data Lake Analytics account, you first need to define: * **Azure Resource Group**: A Data Lake Analytics account must be created within an Azure Resource group. * **Data Lake Analytics account name**: The Data Lake account name must only contain lowercase letters and numbers.
-* **Location**: one of the Azure data centers that supports Data Lake Analytics. -* **Default Data Lake Store account**: each Data Lake Analytics account has a default Data Lake Store account. These accounts must be in the same location. +* **Location**: One of the Azure data centers that supports Data Lake Analytics. +* **Default Data Lake Store account**: Each Data Lake Analytics account has a default Data Lake Store account. These accounts must be in the same location. The PowerShell snippets in this tutorial use these variables to store this information @@ -90,7 +90,7 @@ Submit the script. Submit-AdlJob -AccountName $adla -ScriptPath "d:\test.usql" ``` -## Monitor U-SQL Jobs +## Monitor U-SQL jobs List all the jobs in the account. The output includes the currently running jobs and those jobs that have recently completed. @@ -128,7 +128,7 @@ Test-AdlStoreItem -Account $adls -Path "/data.csv" Stop-AdlJob -Account $dataLakeAnalyticAccountName -JobID $jobID ``` -## Upload and Download files +## Upload and download files Download the output of the U-SQL script. @@ -227,38 +227,25 @@ created an Analytics account, you can add additional Data Lake Storage accounts ## Manage catalog items The U-SQL catalog is used to structure data and code so they can be shared by U-SQL scripts. The catalog enables the highest performance possible with data in Azure Data Lake. For more information, see [Use U-SQL catalog](data-lake-analytics-use-u-sql-catalog.md).
-### List catalog items - #List databases - Get-AdlCatalogItem ` - -Account $adlAnalyticsAccountName ` - -ItemType Database - - - - #List tables - Get-AdlCatalogItem ` - -Account $adlAnalyticsAccountName ` - -ItemType Table ` - -Path "master.dbo" - -### Get catalog item details - #Get a database - Get-AdlCatalogItem ` - -Account $adlAnalyticsAccountName ` - -ItemType Database ` - -Path "master" - - #Get a table - Get-AdlCatalogItem ` - -Account $adlAnalyticsAccountName ` - -ItemType Table ` - -Path "master.dbo.mytable" - -### Test existence of catalog item - Test-AdlCatalogItem ` - -Account $adlAnalyticsAccountName ` - -ItemType Database ` - -Path "master" +### List databases + + Get-AdlCatalogItem -Account $adla -ItemType Database + +### List tables in a schema + + Get-AdlCatalogItem -Account $adla -ItemType Table -Path "master.dbo" + +### Get details of a database + + Get-AdlCatalogItem -Account $adla -ItemType Database -Path "master" + +### Get details of a table in a database + + Get-AdlCatalogItem -Account $adla -ItemType Table -Path "master.dbo.mytable" + +### Test existence of a database + + Test-AdlCatalogItem -Account $adla -ItemType Database -Path "master" ## See also * [Overview of Microsoft Azure Data Lake Analytics](data-lake-analytics-overview.md) diff --git a/articles/data-lake-analytics/data-lake-analytics-monitor-and-troubleshoot-jobs-tutorial.md b/articles/data-lake-analytics/data-lake-analytics-monitor-and-troubleshoot-jobs-tutorial.md index 526a17b253a09..3d85bb87b88e2 100644 --- a/articles/data-lake-analytics/data-lake-analytics-monitor-and-troubleshoot-jobs-tutorial.md +++ b/articles/data-lake-analytics/data-lake-analytics-monitor-and-troubleshoot-jobs-tutorial.md @@ -27,7 +27,7 @@ In this tutorial, you will setup a missing source file problem, and use the Azur Before you begin this tutorial, you must have the following: * **Basic knowledge of Data Lake Analytics job process**. 
See [Get started with Azure Data Lake Analytics using Azure Portal](data-lake-analytics-get-started-portal.md). -* **A Data Lake Analytics account**. See [Get started with Azure Data Lake Analytics using Azure Portal](data-lake-analytics-get-started-portal.md#create-data-lake-analytics-account). +* **A Data Lake Analytics account**. See [Get started with Azure Data Lake Analytics using Azure Portal](data-lake-analytics-get-started-portal.md). * **Copy the sample data to the default Data Lake Store account**. See [Prepare source data](data-lake-analytics-get-started-portal.md) ## Submit a Data Lake Analytics job @@ -38,7 +38,7 @@ Now you will create a U-SQL job with a bad source file name. 1. From the Azure Portal, click **Microsoft Azure** in the upper left corner. 2. Click the tile with your Data Lake Analytics account name. It was pinned here when the account was created. If the account is not pinned there, see - [Open an Analytics account from portal](data-lake-analytics-manage-use-portal.md#manage-data-sources). + [Open an Analytics account from portal](data-lake-analytics-manage-use-portal.md). 3. Click **New Job** from the top menu. 4. 
Enter a Job name, and the following U-SQL script: diff --git a/articles/data-lake-analytics/data-lake-analytics-overview.md b/articles/data-lake-analytics/data-lake-analytics-overview.md index 8e2075631a8dd..f7f353f372e03 100644 --- a/articles/data-lake-analytics/data-lake-analytics-overview.md +++ b/articles/data-lake-analytics/data-lake-analytics-overview.md @@ -49,8 +49,7 @@ Azure Data Lake Analytics is an on-demand analytics job service to simplify big * Management - * [Manage Azure Data Lake Analytics using Azure portal](data-lake-analytics-manage-use-portal.md) - * [Manage Azure Data Lake Analytics using Azure PowerShell](data-lake-analytics-manage-use-powershell.md) + * Manage Azure Data Lake Analytics using [Azure portal](data-lake-analytics-manage-use-portal.md) | [Azure PowerShell](data-lake-analytics-manage-use-powershell.md) * [Monitor and troubleshoot Azure Data Lake Analytics jobs using Azure portal](data-lake-analytics-monitor-and-troubleshoot-jobs-tutorial.md) * Let us know what you think diff --git a/articles/data-lake-analytics/data-lake-analytics-use-u-sql-catalog.md b/articles/data-lake-analytics/data-lake-analytics-use-u-sql-catalog.md index 8662cff4b7e7a..abd68d7485dfa 100644 --- a/articles/data-lake-analytics/data-lake-analytics-use-u-sql-catalog.md +++ b/articles/data-lake-analytics/data-lake-analytics-use-u-sql-catalog.md @@ -73,7 +73,3 @@ You can use Data Lake Tools for Visual Studio to manage the catalog. 
For more i * [Manage Azure Data Lake Analytics using Azure portal](data-lake-analytics-manage-use-portal.md) * [Manage Azure Data Lake Analytics using Azure PowerShell](data-lake-analytics-manage-use-powershell.md) * [Monitor and troubleshoot Azure Data Lake Analytics jobs using Azure portal](data-lake-analytics-monitor-and-troubleshoot-jobs-tutorial.md) -* End-to-end tutorial - - * [Use Azure Data Lake Analytics interactive tutorials](data-lake-analytics-use-interactive-tutorials.md) - * [Analyze Website logs using Azure Data Lake Analytics](data-lake-analytics-analyze-weblogs.md) diff --git a/articles/data-lake-analytics/data-lake-analytics-use-window-functions.md b/articles/data-lake-analytics/data-lake-analytics-use-window-functions.md index 2e01d78534f49..501667a12844d 100644 --- a/articles/data-lake-analytics/data-lake-analytics-use-window-functions.md +++ b/articles/data-lake-analytics/data-lake-analytics-use-window-functions.md @@ -653,7 +653,7 @@ PERCENTILE_DISC does not interpolate values, so the median for Web is 200 - whic ## See also * [Develop U-SQL scripts using Data Lake Tools for Visual Studio](data-lake-analytics-data-lake-tools-get-started.md) -* [Use Azure Data Lake Analytics interactive tutorials](data-lake-analytics-use-interactive-tutorials.md) +* [Learn about the U-SQL language](http://usql.io) * [Get started with Azure Data Lake Analytics U-SQL language](data-lake-analytics-u-sql-get-started.md) diff --git a/articles/event-hubs/event-hubs-dedicated-overview.md b/articles/event-hubs/event-hubs-dedicated-overview.md index e0280507b84bd..282b0314ff896 100644 --- a/articles/event-hubs/event-hubs-dedicated-overview.md +++ b/articles/event-hubs/event-hubs-dedicated-overview.md @@ -13,7 +13,7 @@ ms.workload: na ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 02/21/2017 +ms.date: 06/01/2017 ms.author: sethm;babanisa --- @@ -51,13 +51,13 @@ The following benefits are available when using Event Hubs Dedicated: * Zero maintenance: we 
manage load balancing, OS updates, security patches, and partitioning. * Fixed monthly pricing. -Event Hubs Dedicated also removes some of the throughput limitations of the Standard offering. Throughput units in Basic and Standard tiers entitle you to 1000 events per second or 1 MBps of ingress per TU and double that amount of egress. The Dedicated scale offering has no restrictions on ingress and egress event counts. These limits are governed only by the processing capacity of the purchased event hubs. +Event Hubs Dedicated also removes some of the throughput limitations of the Standard offering. Throughput units in Basic and Standard tiers entitle you to 1000 events per second or 1 MB per second of ingress per TU and double that amount of egress. The Dedicated scale offering has no restrictions on ingress and egress event counts. These limits are governed only by the processing capacity of the purchased event hubs. This service is targeted at the largest telemetry users and is available to customers with an enterprise agreement. ## How to onboard -The Event Hubs Dedicated platform is offered to the public through an enterprise agreement with varying sizes of CUs. Each CU provides approximately the equivalent of 200 throughput units. You can scale your capacity up or down throughout the month to meet your needs by adding or removing CUs. The dedicated plan is unique in that you will experience a more hands-on onboarding from the Event Hubs product team to get the flexible deployment that is right for you. +The Event Hubs Dedicated platform is offered through an enterprise agreement with varying sizes of CUs. Each CU provides approximately the equivalent of 200 throughput units. You can scale your capacity up or down throughout the month to meet your needs by adding or removing CUs. The Dedicated plan is unique in that you will experience a more hands-on onboarding from the Event Hubs product team to get the flexible deployment that is right for you. 
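As a back-of-envelope check on the sizing described above — assuming the stated figures of roughly 200 throughput units (TUs) per capacity unit (CU), 1000 events or 1 MB per second of ingress per TU, and double that amount of egress — the Standard-tier equivalent of one CU works out as:

```shell
# Rough Event Hubs Dedicated sizing sketch, using the figures stated above.
# These are approximations for planning, not service guarantees.
cus=1
tus=$((cus * 200))               # ~200 TUs per CU
ingress_mbps=$tus                # 1 MB/s ingress per TU
egress_mbps=$((tus * 2))         # egress is double the ingress entitlement
events_per_sec=$((tus * 1000))   # 1000 events/s per TU
echo "$cus CU ~= $tus TUs: ${ingress_mbps} MB/s in, ${egress_mbps} MB/s out, ${events_per_sec} events/s"
```

Scaling `cus` up makes it clear why this tier targets the largest telemetry users.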
## Next steps Contact your Microsoft sales representative or Microsoft Support to get additional details about Event Hubs Dedicated Capacity. You can also learn more about Event Hubs by visiting the following links: diff --git a/articles/hdinsight/hdinsight-hadoop-port-settings-for-services.md b/articles/hdinsight/hdinsight-hadoop-port-settings-for-services.md index 87e0ba2534cfc..5c52d520765de 100644 --- a/articles/hdinsight/hdinsight-hadoop-port-settings-for-services.md +++ b/articles/hdinsight/hdinsight-hadoop-port-settings-for-services.md @@ -14,11 +14,11 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: na ms.workload: big-data -ms.date: 05/17/2017 +ms.date: 06/02/2017 ms.author: larryfr --- -# Ports and URIs used by HDInsight +# Ports used by Hadoop services on HDInsight This document provides a list of the ports used by Hadoop services running on Linux-based HDInsight clusters. It also provides information on ports used to connect to the cluster using SSH. @@ -100,8 +100,8 @@ All services publicly exposed on the internet must be authenticated: | Service | Nodes | Port | Protocol | Description | | --- | --- | --- | --- | --- | -| HiveServer2 |Head nodes |10001 |Thrift |Service for programmatically connecting to Hive (Thrift/JDBC) | -| Hive Metastore |Head nodes |9083 |Thrift |Service for programmatically connecting to Hive metadata (Thrift/JDBC) | +| HiveServer2 |Head nodes |10001 |Thrift |Service for connecting to Hive (Thrift/JDBC) | +| Hive Metastore |Head nodes |9083 |Thrift |Service for connecting to Hive metadata (Thrift/JDBC) | ### WebHCat ports @@ -147,3 +147,8 @@ All services publicly exposed on the internet must be authenticated: | Broker |Worker nodes |9092 |[Kafka Wire Protocol](http://kafka.apache.org/protocol.html) |Used for client communication | |   |Zookeeper nodes |2181 |  |The port that clients use to connect to Zookeeper | +### Spark ports + +| Service | Nodes | Port | Protocol | Description | +| --- | --- | --- | --- | --- | +| Spark 
Thrift servers |Head nodes |10002 |Thrift |Service for connecting to Spark SQL (Thrift/JDBC) | \ No newline at end of file diff --git a/articles/hdinsight/hdinsight-hadoop-use-hive-beeline.md b/articles/hdinsight/hdinsight-hadoop-use-hive-beeline.md index 6c2dff11eba1d..05294100c2b00 100644 --- a/articles/hdinsight/hdinsight-hadoop-use-hive-beeline.md +++ b/articles/hdinsight/hdinsight-hadoop-use-hive-beeline.md @@ -28,7 +28,7 @@ Beeline is a Hive client that is included on the head nodes of your HDInsight cl | Where you run Beeline from | Parameters | | --- | --- | --- | -| An SSH connection to a headnode or edge node | `-u 'jdbc:hive2://headnodehost:10001/;transportMode=http' -n admin` | +| An SSH connection to a headnode or edge node | `-u 'jdbc:hive2://headnodehost:10001/;transportMode=http'` | | Outside the cluster | `-u 'jdbc:hive2://clustername.azurehdinsight.net:443/;ssl=true;transportMode=http;httpPath=/hive2' -n admin -p password` | > [!NOTE] @@ -51,22 +51,20 @@ Beeline is a Hive client that is included on the head nodes of your HDInsight cl ## Use Beeline -1. When starting Beeline, you must provide a connection string for HiveServer2 on your HDInsight cluster. You must also provide the account name for the cluster login (usually `admin`). If you run the command from outside the cluster, you must also provide the cluster login password. Use the following table to find the connection string format and parameters to use: +1. When starting Beeline, you must provide a connection string for HiveServer2 on your HDInsight cluster. To run the command from outside the cluster, you must also provide the cluster login account name (default `admin`) and password. 
Use the following table to find the connection string format and parameters to use: | Where you run Beeline from | Parameters | | --- | --- | --- | - | An SSH connection to a headnode or edge node | `-u 'jdbc:hive2://headnodehost:10001/;transportMode=http' -n admin` | + | An SSH connection to a headnode or edge node | `-u 'jdbc:hive2://headnodehost:10001/;transportMode=http'` | | Outside the cluster | `-u 'jdbc:hive2://clustername.azurehdinsight.net:443/;ssl=true;transportMode=http;httpPath=/hive2' -n admin -p password` | For example, the following command can be used to start Beeline from an SSH session to the cluster: ```bash - beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http' -n admin + beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http' ``` - This command starts the Beeline client, and connects to HiveServer2 on the cluster head node. The `-n` parameter is used to provide the cluster login account. The default login is `admin`. If you used a different name during cluster creation, use it instead of `admin`. - - Once the command completes, you arrive at a `jdbc:hive2://headnodehost:10001/>` prompt. + This command starts the Beeline client, and connects to HiveServer2 on the cluster head node. Once the command completes, you arrive at a `jdbc:hive2://headnodehost:10001/>` prompt. 2. Beeline commands begin with a `!` character, for example `!help` displays help. However the `!` can be omitted for some commands. For example, `help` also works. @@ -188,10 +186,10 @@ Use the following steps to create a file, then run it using Beeline. 3. To save the file, use **Ctrl**+**_X**, then enter **Y**, and finally **Enter**. -4. Use the following to run the file using Beeline. Replace **HOSTNAME** with the name obtained earlier for the head node, and **PASSWORD** with the password for the admin account: +4. 
Use the following to run the file using Beeline: ```bash - beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http' -n admin -i query.hql + beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http' -i query.hql ``` > [!NOTE] @@ -228,6 +226,15 @@ Replace the `clustername` in the connection string with the name of your HDInsig Replace `admin` with the name of your cluster login, and replace `password` with the password for your cluster login. +## Use Beeline with Spark + +Spark provides its own implementation of HiveServer2, which is often referred to as the Spark Thrift server. This service uses Spark SQL to resolve queries instead of Hive, and may provide better performance depending on your query. + +To connect to the Spark Thrift server of a Spark on HDInsight cluster, use port `10002` instead of `10001`. For example, `beeline -u 'jdbc:hive2://headnodehost:10002/;transportMode=http'`. + +> [!IMPORTANT] +> The Spark Thrift server is not directly accessible over the internet. You can only connect to it from an SSH session or inside the same Azure Virtual Network as the HDInsight cluster.
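Since the two in-cluster endpoints above differ only in port (`10001` for HiveServer2, `10002` for the Spark Thrift server), the connection string can be sketched as a tiny helper. The `beeline_url` function is hypothetical, not part of the Beeline tooling:

```shell
# Build the in-cluster Beeline JDBC URL for Hive (port 10001) or the
# Spark Thrift server (port 10002), per the ports described above.
beeline_url() {
  case "$1" in
    hive)  port=10001 ;;
    spark) port=10002 ;;
    *)     echo "unknown service: $1" >&2; return 1 ;;
  esac
  echo "jdbc:hive2://headnodehost:$port/;transportMode=http"
}

beeline_url spark
# → jdbc:hive2://headnodehost:10002/;transportMode=http
```

From an SSH session on the cluster you would then run `beeline -u "$(beeline_url spark)"`.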
+ ## Next steps For more general information on Hive in HDInsight, see the following document: diff --git a/articles/iot-hub/TOC.md b/articles/iot-hub/TOC.md index cfd1eb3c624a2..684aa4c4ea830 100644 --- a/articles/iot-hub/TOC.md +++ b/articles/iot-hub/TOC.md @@ -6,13 +6,13 @@ # [Get Started](iot-hub-get-started.md) ## Setup your device -### Use a simulated device +### [Use a simulated device](iot-hub-get-started-simulated.md) #### [.NET](iot-hub-csharp-csharp-getstarted.md) #### [Java](iot-hub-java-java-getstarted.md) #### [Node.js](iot-hub-node-node-getstarted.md) #### [Python](iot-hub-python-getstarted.md) -### Use a physical device +### [Use a physical device](iot-hub-get-started-physical.md) #### [Raspberry Pi with Node.js](iot-hub-raspberry-pi-kit-node-get-started.md) #### [Raspberry Pi with C](iot-hub-raspberry-pi-kit-c-get-started.md) @@ -30,13 +30,14 @@ ### [Use an online device simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) -## [Manage cloud device messaging with iothub-explorer](iot-hub-explorer-cloud-device-messaging.md) -## [Save IoT Hub messages to Azure data storage](iot-hub-store-data-in-azure-table-storage.md) -## [Data Visualization in Power BI](iot-hub-live-data-visualization-in-power-bi.md) -## [Data Visualization with Web Apps](iot-hub-live-data-visualization-in-web-apps.md) -## [Weather forecast using Azure Machine Learning](iot-hub-weather-forecast-machine-learning.md) -## [Device management with iothub-explorer](iot-hub-device-management-iothub-explorer.md) -## [Remote monitoring and notifications with ​Logic ​Apps](iot-hub-monitoring-notifications-with-azure-logic-apps.md) +## Extended IoT scenarios +### [Manage cloud device messaging with iothub-explorer](iot-hub-explorer-cloud-device-messaging.md) +### [Save IoT Hub messages to Azure data storage](iot-hub-store-data-in-azure-table-storage.md) +### [Data Visualization in Power BI](iot-hub-live-data-visualization-in-power-bi.md) +### [Data Visualization with Web 
Apps](iot-hub-live-data-visualization-in-web-apps.md) +### [Weather forecast using Azure Machine Learning](iot-hub-weather-forecast-machine-learning.md) +### [Device management with iothub-explorer](iot-hub-device-management-iothub-explorer.md) +### [Remote monitoring and notifications with ​Logic ​Apps](iot-hub-monitoring-notifications-with-azure-logic-apps.md) # How To ## Plan @@ -54,13 +55,13 @@ ##### [Use custom endpoints and routing rules for device-to-cloud messages](iot-hub-devguide-messages-read-custom.md) ##### [Send cloud-to-device messages from IoT Hub](iot-hub-devguide-messages-c2d.md) ##### [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md) +##### [Choose a communication protocol](iot-hub-devguide-protocols.md) #### [Upload files from a device](iot-hub-devguide-file-upload.md) #### [Manage device identities](iot-hub-devguide-identity-registry.md) #### [Control access to IoT Hub](iot-hub-devguide-security.md) #### [Understand device twins](iot-hub-devguide-device-twins.md) #### [Invoke direct methods on a device](iot-hub-devguide-direct-methods.md) #### [Schedule jobs on multiple devices](iot-hub-devguide-jobs.md) -#### [Choose a communication protocol](iot-hub-devguide-protocols.md) #### [IoT Hub endpoints](iot-hub-devguide-endpoints.md) #### [Query language](iot-hub-devguide-query-language.md) #### [Quotas and throttling](iot-hub-devguide-quotas-throttling.md) @@ -132,7 +133,7 @@ ### [Use a real device](iot-hub-iot-edge-physical-device.md) # Reference -## [Azure CLI 2.0](/cli/azure/iot) +## [Azure CLI](/cli/azure/iot) ## [.NET (Service)](/dotnet/api/microsoft.azure.devices) ## [.NET (Devices)](/dotnet/api/microsoft.azure.devices.client) ## [Java (Service)](/java/api/com.microsoft.azure.sdk.iot.service) diff --git a/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md b/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md index ceb7d835c58e4..f46e7fb84824b 100644 --- 
a/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md +++ b/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md @@ -18,7 +18,7 @@ ms.author: dobett --- # Read device-to-cloud messages from the built-in endpoint -By default, messages are routed to the built-in service-facing endpoint (**messages/events**), that is compatible with [Event Hubs][lnk-event-hubs]. An IoT hub exposes the following properties to enable you to control the built-in Event Hub-compatible messaging endpoint **messages/events**. +By default, messages are routed to the built-in service-facing endpoint (**messages/events**), which is compatible with [Event Hubs][lnk-event-hubs]. This endpoint is currently only exposed using the [AMQP][lnk-amqp] protocol on port 5671. An IoT hub exposes the following properties to enable you to control the built-in Event Hub-compatible messaging endpoint **messages/events**. | Property | Description | | ------------------- | ----------- | @@ -80,3 +80,4 @@ If you want to route your device-to-cloud messages to custom endpoints, see [Use [lnk-event-hub-partitions]: ../event-hubs/event-hubs-features.md#partitions [lnk-servicebus-sdk]: https://www.nuget.org/packages/WindowsAzure.ServiceBus [lnk-eventprocessorhost]: http://blogs.msdn.com/b/servicebus/archive/2015/01/16/event-processor-host-best-practices-part-1.aspx +[lnk-amqp]: https://www.amqp.org/ diff --git a/articles/iot-hub/iot-hub-get-started-physical.md b/articles/iot-hub/iot-hub-get-started-physical.md new file mode 100644 index 0000000000000..5c90440bb9d66 --- /dev/null +++ b/articles/iot-hub/iot-hub-get-started-physical.md @@ -0,0 +1,48 @@ +--- +title: 'Get started connecting physical devices to Azure IoT Hub | Microsoft Docs' +description: 'Learn how to create physical IoT devices and connect them to Azure IoT Hub. Your devices can send telemetry to IoT Hub and IoT Hub can monitor and manage your devices.'
+services: iot-hub +documentationcenter: '' +author: dominicbetts +manager: timlt +editor: '' +keywords: 'azure iot hub tutorial' + +ms.service: iot-hub +ms.devlang: na +ms.topic: article +ms.tgt_pltfrm: na +ms.workload: na +ms.date: 06/02/2017 +ms.author: dobett + +--- +# Azure IoT Hub get started with physical devices tutorials + +These tutorials introduce you to Azure IoT Hub and the device SDKs. The tutorials cover common IoT scenarios to demonstrate the capabilities of IoT Hub. The tutorials also illustrate how to combine IoT Hub with other Azure services and tools to build more powerful IoT solutions. The tutorials listed in the following table show you how to create physical IoT devices. + +| IoT device | Programming language | +|---------------------------------|----------------------| +| Raspberry Pi | [Node.js][Pi_Nd], [C][Pi_C] | +| Intel Edison | [Node.js][Ed_Nd], [C][Ed_C] | +| Adafruit Feather HUZZAH ESP8266 | [Arduino][Hu_Ard] | +| Sparkfun ESP8266 Thing Dev | [Arduino][Th_Ard] | +| Adafruit Feather M0 | [Arduino][M0_Ard] | + +In addition, you can use an IoT Edge gateway to enable devices to connect to your IoT hub. 
+ +| Gateway device | Programming language | Platform | +|------------------------------|----------------------|------------------| +| Intel NUC (model DE3815TYKE) | C | [Wind River Linux][NUC_Lnx] | + +[!INCLUDE [iot-hub-get-started-extended](../../includes/iot-hub-get-started-extended.md)] + + +[Pi_Nd]: iot-hub-raspberry-pi-kit-node-get-started.md +[Pi_C]: iot-hub-raspberry-pi-kit-c-get-started.md +[Ed_Nd]: iot-hub-intel-edison-kit-node-get-started.md +[Ed_C]: iot-hub-intel-edison-kit-c-get-started.md +[Hu_Ard]: iot-hub-arduino-huzzah-esp8266-get-started.md +[Th_Ard]: iot-hub-sparkfun-esp8266-thing-dev-get-started.md +[M0_Ard]: iot-hub-adafruit-feather-m0-wifi-kit-arduino-get-started.md +[NUC_Lnx]: iot-hub-gateway-kit-c-lesson1-set-up-nuc.md diff --git a/articles/iot-hub/iot-hub-get-started-simulated.md b/articles/iot-hub/iot-hub-get-started-simulated.md new file mode 100644 index 0000000000000..43f02213617eb --- /dev/null +++ b/articles/iot-hub/iot-hub-get-started-simulated.md @@ -0,0 +1,45 @@ +--- +title: 'Get started connecting simulated devices to Azure IoT Hub | Microsoft Docs' +description: 'Learn how to create simulated IoT devices and connect them to Azure IoT Hub. Your devices can send telemetry to IoT Hub and IoT Hub can monitor and manage your devices.' +services: iot-hub +documentationcenter: '' +author: dominicbetts +manager: timlt +editor: '' +keywords: 'azure iot hub tutorial' + +ms.service: iot-hub +ms.devlang: na +ms.topic: article +ms.tgt_pltfrm: na +ms.workload: na +ms.date: 06/02/2017 +ms.author: dobett + +--- +# Azure IoT Hub get started with simulated devices tutorials + +These tutorials introduce you to Azure IoT Hub and the device SDKs. The tutorials cover common IoT scenarios to demonstrate the capabilities of IoT Hub. The tutorials also illustrate how to combine IoT Hub with other Azure services and tools to build more powerful IoT solutions. The tutorials listed in the following table show you how to create simulated IoT devices.
+ +| Programming language | +|----------------------| +| [.NET][Sim_NET] | +| [Java][Sim_Jav] | +| [Node.js][Sim_Nd] | +| [Python][Sim_Pyth] | + +In addition, you can use an IoT Edge gateway to enable simulated devices to connect to your IoT hub. + +| Programming language | Platform | +|----------------------|------------------- | +| C | [Linux][Sim_Lnx] | +| C | [Windows][Sim_Win] | + +[!INCLUDE [iot-hub-get-started-extended](../../includes/iot-hub-get-started-extended.md)] + +[Sim_NET]: iot-hub-csharp-csharp-getstarted.md +[Sim_Jav]: iot-hub-java-java-getstarted.md +[Sim_Nd]: iot-hub-node-node-getstarted.md +[Sim_Pyth]: iot-hub-python-getstarted.md +[Sim_Lnx]: iot-hub-linux-iot-edge-get-started.md +[Sim_Win]: iot-hub-windows-iot-edge-get-started.md diff --git a/articles/iot-hub/iot-hub-get-started.md index 01b8dfc0d3584..2e0f270e74961 100644 --- a/articles/iot-hub/iot-hub-get-started.md +++ b/articles/iot-hub/iot-hub-get-started.md @@ -29,7 +29,7 @@ You can use Azure IoT Hub and the Azure IoT device SDKs to build Internet of Thi These tutorials introduce you to Azure IoT Hub and the device SDKs. The tutorials cover common IoT scenarios to demonstrate the capabilities of IoT Hub. The tutorials also illustrate how to combine IoT Hub with other Azure services and tools to build more powerful IoT solutions. In the tutorials you can choose to use either simulated or real IoT devices. In addition, you can learn how to use a gateway to enable devices to connect to your IoT hub. -## Device setup scenario: Connect IoT device or gateway to Azure IoT Hub +## Set up your device: Connect IoT device or gateway to Azure IoT Hub You can choose your real or simulated device to get started.
@@ -49,22 +49,7 @@ In addition, you can use an IoT Edge gateway to enable devices to connect to you | Intel NUC (model DE3815TYKE) | C | [Wind River Linux][NUC_Lnx] | | Simulated gateway | C | [Linux][Sim_Lnx], [Windows][Sim_Win] | -## Extended IoT scenarios: Use other Azure services and tools - -When you have connected your device to IoT Hub, you can explore additional scenarios that use other Azure tools and services: - -| Scenario | Azure service or tool | -|---------------------------------------------|------------------------------------| -| [Manage IoT Hub messages][Mg_IoT_Hub_Msg] | iothub-explorer tool | -| [Manage your IoT device][Mg_IoT_Dv] | iothub-explorer tool | -| [Save IoT Hub messages to Azure storage][Sv_IoT_Msg_Stor] | Azure table storage | -| [Visualize sensor data][Vis_Data] | Microsoft Power BI, Azure Web Apps | -| [Forecast weather with sensor data][Weather_Forecast] | Azure Machine Learning | -| [Automatic anomaly detection and reaction][Anomaly_Detect] | Azure Logic Apps | - -## Next steps - -When you have completed these tutorials, you can further explore the capabilities of IoT Hub in the [Developer guide][lnk-dev-guide]. You can find additional tutorials in the [How To][lnk-how-to] section. 
+[!INCLUDE [iot-hub-get-started-extended](../../includes/iot-hub-get-started-extended.md)] [Pi_Nd]: iot-hub-raspberry-pi-kit-node-get-started.md @@ -81,11 +66,3 @@ When you have completed these tutorials, you can further explore the capabilitie [NUC_Lnx]: iot-hub-gateway-kit-c-lesson1-set-up-nuc.md [Sim_Lnx]: iot-hub-linux-iot-edge-get-started.md [Sim_Win]: iot-hub-windows-iot-edge-get-started.md -[Mg_IoT_Hub_Msg]: iot-hub-explorer-cloud-device-messaging.md -[Mg_IoT_Dv]: iot-hub-device-management-iothub-explorer.md -[Sv_IoT_Msg_Stor]: iot-hub-store-data-in-azure-table-storage.md -[Vis_Data]: iot-hub-live-data-visualization-in-power-bi.md -[Weather_Forecast]: iot-hub-weather-forecast-machine-learning.md -[Anomaly_Detect]: iot-hub-monitoring-notifications-with-azure-logic-apps.md -[lnk-dev-guide]: iot-hub-devguide.md -[lnk-how-to]: iot-hub-how-to.md \ No newline at end of file diff --git a/articles/iot-hub/iot-hub-raspberry-pi-kit-node-get-started.experimental.md b/articles/iot-hub/iot-hub-raspberry-pi-kit-node-get-started.experimental.md index e88e2ecfab047..19fdfcdb691e0 100644 --- a/articles/iot-hub/iot-hub-raspberry-pi-kit-node-get-started.experimental.md +++ b/articles/iot-hub/iot-hub-raspberry-pi-kit-node-get-started.experimental.md @@ -7,7 +7,9 @@ author: shizn manager: timtl tags: '' keywords: 'azure iot raspberry pi, raspberry pi iot hub, raspberry pi send data to cloud, raspberry pi to cloud' -experiment_id: "xshi-happypathemu-20161202" + +ROBOTS: NOINDEX +redirect_url: /azure/iot-hub/iot-hub-raspberry-pi-kit-node-get-started ms.assetid: b0e14bfa-8e64-440a-a6ec-e507ca0f76ba ms.service: iot-hub diff --git a/articles/iot-hub/iot-hub-raspberry-pi-kit-node-get-started.md b/articles/iot-hub/iot-hub-raspberry-pi-kit-node-get-started.md index 39ec013ae783e..ad8607e5e7918 100644 --- a/articles/iot-hub/iot-hub-raspberry-pi-kit-node-get-started.md +++ b/articles/iot-hub/iot-hub-raspberry-pi-kit-node-get-started.md @@ -7,8 +7,6 @@ author: shizn manager: timtl tags: '' 
keywords: 'azure iot raspberry pi, raspberry pi iot hub, raspberry pi send data to cloud, raspberry pi to cloud' -experimental: true -experiment_id: "xshi-happypathemu-20161202" ms.assetid: b0e14bfa-8e64-440a-a6ec-e507ca0f76ba ms.service: iot-hub diff --git a/articles/log-analytics/log-analytics-template-workspace-configuration.md b/articles/log-analytics/log-analytics-template-workspace-configuration.md index 35cf611dc4a08..686a09d3a833e 100644 --- a/articles/log-analytics/log-analytics-template-workspace-configuration.md +++ b/articles/log-analytics/log-analytics-template-workspace-configuration.md @@ -13,7 +13,7 @@ ms.workload: na ms.tgt_pltfrm: na ms.devlang: json ms.topic: article -ms.date: 11/01/2016 +ms.date: 06/01/2017 ms.author: richrund --- @@ -127,7 +127,7 @@ The following template sample illustrates how to: "sku": { "Name": "[parameters('serviceTier')]" }, - "retentionInDays": "[parameters('dataRetention')]" + "retention": "[parameters('dataRetention')]" }, "resources": [ { diff --git a/articles/media-services/TOC.md b/articles/media-services/TOC.md index 9885b684e2142..f9ab2fb4a831d 100644 --- a/articles/media-services/TOC.md +++ b/articles/media-services/TOC.md @@ -214,7 +214,7 @@ ## [PowerShell (Resource Manager)](/powershell/module/azurerm.media) ## [PowerShell (Service Management)](/powershell/module/azure/?view=azuresmps-3.7.0) ## [.NET](/dotnet/api/microsoft.windowsazure.mediaservices.client) -## [REST](/rest/api/media) +## [REST](/rest/api/media/mediaservice) # Resources ## [Release notes](media-services-release-notes.md) diff --git a/articles/operations-management-suite/TOC.md b/articles/operations-management-suite/TOC.md index b0984fd50fcfc..660c0bf9328f4 100644 --- a/articles/operations-management-suite/TOC.md +++ b/articles/operations-management-suite/TOC.md @@ -3,8 +3,10 @@ ## [OMS architecture](operations-management-suite-architecture.md) # Get started -## Walkthroughs -### [Service 
Map](operations-management-suite-walkthrough-servicemap.md) +## [Log Analytics](../log-analytics/log-analytics-get-started.md) +## [Automation](../automation/automation-offering-get-started.md) +## [Backup](../backup/backup-introduction-to-azure-backup.md) +## [Site Recovery](../site-recovery/site-recovery-overview.md) # How to @@ -35,7 +37,9 @@ #### [AD Assessment](../log-analytics/log-analytics-ad-assessment.md?toc=%2fazure%2foperations-management-suite%2ftoc.json) #### [AD Replication Status](../log-analytics/log-analytics-ad-replication-status.md?toc=%2fazure%2foperations-management-suite%2ftoc.json) #### [Alert Management](../log-analytics/log-analytics-solution-alert-management.md?toc=%2fazure%2foperations-management-suite%2ftoc.json) -#### [Service Map](operations-management-suite-service-map.md) +#### Service Map +##### [Walkthrough](operations-management-suite-walkthrough-servicemap.md) +##### [Use](operations-management-suite-service-map.md) ##### [Configure](operations-management-suite-service-map-configure.md) #### [Azure Networking Analytics](../log-analytics/log-analytics-azure-networking-analytics.md?toc=%2fazure%2foperations-management-suite%2ftoc.json) #### [Containers](../log-analytics/log-analytics-containers.md?toc=%2fazure%2foperations-management-suite%2ftoc.json) diff --git a/articles/postgresql/concepts-limits.md b/articles/postgresql/concepts-limits.md index ac08430cf8849..cd8dd6f85c7ea 100644 --- a/articles/postgresql/concepts-limits.md +++ b/articles/postgresql/concepts-limits.md @@ -11,7 +11,7 @@ ms.service: postgresql-database ms.tgt_pltfrm: portal ms.custom: mvc ms.topic: article -ms.date: 05/31/2017 +ms.date: 06/01/2017 --- # Limitations in Azure Database for PostgreSQL The Azure Database for PostgreSQL service is in public preview. The following sections describe capacity and functional limits in the database service. 
@@ -40,23 +40,23 @@ There is a maximum number of connections, compute units, and storage in each ser When too many connections are reached, you may receive the following error: > FATAL: sorry, too many clients already -## Preview functional limitations: -### Scale operations: +## Preview functional limitations +### Scale operations 1. Dynamic scaling of servers across service tiers is currently not supported. That is, switching between Basic and Standard service tiers. 2. Dynamic on-demand increase of storage on pre-created server is currently not supported. 3. Decreasing server storage size is not supported. -### Server version upgrades: +### Server version upgrades - Automated migration between major database engine versions is currently not supported. -### Subscription management: +### Subscription management - Dynamically moving pre-created servers across subscription and resource group is currently not supported. -### Point-in-time-restore: +### Point-in-time-restore 1. Restoring to different service tier and/or Compute Units and Storage size is not allowed. 2. Restoring a dropped server is not supported. 
-## Next steps: +## Next steps - Understand [What’s available in each pricing tier](concepts-service-tiers.md) - Understand [Supported PostgreSQL Database Versions](concepts-supported-versions.md) - Review [How To Back up and Restore a server in Azure Database for PostgreSQL using the Azure portal](howto-restore-server-portal.md) diff --git a/articles/postgresql/quickstart-create-server-database-azure-cli.md b/articles/postgresql/quickstart-create-server-database-azure-cli.md index 75c954410e1ba..21721aa4bcf7c 100644 --- a/articles/postgresql/quickstart-create-server-database-azure-cli.md +++ b/articles/postgresql/quickstart-create-server-database-azure-cli.md @@ -17,7 +17,7 @@ ms.date: 05/31/2017 # Create an Azure Database for PostgreSQL using the Azure CLI Azure Database for PostgreSQL is a managed service that enables you to run, manage, and scale highly available PostgreSQL databases in the cloud. The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to create an Azure Database for PostgreSQL server in an [Azure resource group](https://docs.microsoft.com/azure/azure-resource-manager/resource-group-overview) using the Azure CLI. -You may use the Azure Cloud Shell in the browser, or use [Install Azure CLI 2.0]( /cli/azure/install-azure-cli) on your own computer to run the code blocks in this tutorial. +You may use the Azure Cloud Shell in the browser to run these Azure CLI commands, or [Install Azure CLI 2.0]( /cli/azure/install-azure-cli) on your own computer. 
[!INCLUDE [cloud-shell-try-it](../../includes/cloud-shell-try-it.md)] diff --git a/articles/security/governance-in-azure.md b/articles/security/governance-in-azure.md new file mode 100644 index 0000000000000..7a5b1f796f88c --- /dev/null +++ b/articles/security/governance-in-azure.md @@ -0,0 +1,492 @@ +--- + +title: Governance in Azure | Microsoft Docs +description: Learn about cloud-based computing services that include a wide selection of compute instances & services that can scale up and down automatically to meet the needs of your application or enterprise. +services: security +documentationcenter: na +author: UnifyCloud +manager: swadhwa +editor: TomSh + +ms.assetid: +ms.service: security +ms.devlang: na +ms.topic: article +ms.tgt_pltfrm: na +ms.workload: na +ms.date: 06/01/2017 +ms.author: TomSh + +--- + +# Governance in Azure + +We know that security is job one in the cloud and how important it is that you find accurate and timely information about Azure security. One of the best reasons to use Azure for your applications and services is to take advantage of its wide array of security tools and capabilities. These tools and capabilities help make it possible to create secure solutions on the secure Azure platform. + +To help you better understand the array of Governance controls implemented within Microsoft Azure from both the customer's and Microsoft operations' perspectives, this article, "Governance in Azure", is written that provides a comprehensive look at the Governance features available with Microsoft Azure. + +## Azure platform + +Azure is a public cloud service platform that supports a broad selection of operating systems, programming languages, frameworks, tools, databases and devices. It can run Linux containers with Dockers integration; build apps with JavaScript, Python, .NET, PHP, Java and Node.js; build back-ends for iOS, Android and Windows devices. 
Azure public cloud services support the same technologies millions of developers and IT professionals already rely on and trust. + +When you build on, or migrate IT assets to, a public cloud service provider, you are relying on that organization's ability to protect your applications and data with the services and the controls they provide to manage the security of your cloud-based assets. + +Azure's infrastructure is designed, from the facility to applications, for hosting millions of customers simultaneously, and it provides a trustworthy foundation upon which businesses can meet their security requirements. In addition, Azure provides you with many security options and the ability to control them so that you can customize security to meet the unique requirements of your organization's deployments. + +This document will help you understand how Azure governance capabilities can help you fulfill these requirements. + +## Abstract + +Microsoft Azure cloud governance provides an integrated audit and consulting approach for reviewing and advising organizations on their usage of the Azure platform. Microsoft Azure cloud governance refers to the decision-making processes, criteria, and policies involved in the planning, architecture, acquisition, deployment, operation, and management of cloud computing. + +To create a plan for Microsoft Azure cloud governance, you need to take an in-depth look at the people, processes, and technologies currently in place, and then build frameworks that make it easy for IT to consistently support business needs while providing end users with the flexibility to use the powerful features of Microsoft Azure. + +This paper describes how you can achieve an elevated level of governance of your IT resources in Microsoft Azure. It can help you understand the security and governance features built into Microsoft Azure.
+ +The following are the main governance issues discussed in this paper: + +- Implementation of policies, processes, and procedures in line with organization goals + +- Security and continuous compliance with organization standards + +- Alerting and monitoring + +## Implementation of policies, processes and procedures + +Management has established roles and responsibilities to oversee implementation of the information security policy and operational continuity across Azure. Microsoft Azure management is responsible for overseeing security and continuity practices within their respective teams (including third parties), and for facilitating compliance with security policies, processes, and standards. + +Here are the factors involved: + +- Account provisioning + +- Subscription controls + +- Role-based access controls + +- Resource management + +- Resource tracking + +- Critical resource control + +- API access to billing information + +- Networking controls + +## Account provisioning + +Defining an account hierarchy is a major step in using and structuring Azure services within a company, and it is the core governance structure. Customers with an Enterprise Agreement can further subdivide the environment into departments, accounts, and finally, subscriptions. + +![Account Provisioning](./media/governance-in-azure/security-governance-in-azure-fig1.png) + +If you do not have an Enterprise Agreement, consider using [Azure tags](https://docs.microsoft.com/azure/azure-resource-manager/resource-group-using-tags) at the subscription level to define a hierarchy. An Azure subscription is the basic unit where all resources are contained. It also defines several limits within Azure, such as the number of cores and resources. Subscriptions can contain [Resource Groups](https://docs.microsoft.com/azure/azure-resource-manager/resource-group-overview), which can contain resources.
[RBAC](https://docs.microsoft.com/azure/api-management/api-management-role-based-access-control) principles apply at all three levels. + +Every enterprise is different, and for non-enterprise customers, a hierarchy built with Azure tags allows significant flexibility in how Azure is organized within the company. Before deploying resources in Microsoft Azure, you should model the hierarchy and understand the impact on billing, resource access, and complexity. + +## Subscription controls + +A subscription controls how resource usage is reported and billed. Subscriptions can be set up for separate billing and payment. As mentioned earlier, one Azure account can have multiple subscriptions. Subscriptions can be used to determine the Azure resource usage of multiple departments in a company. + +For example, if a company has IT, HR, and Marketing departments running different projects, each department can be billed according to its usage of Azure resources, such as virtual machines. In this way, you can control the finances of each department. + +Azure subscriptions establish three parameters: + +- A unique subscriber ID + +- A billing location + +- A set of available resources + +For an individual, that would include one Microsoft account ID, a credit card number, and the full suite of Azure services, although Microsoft enforces consumption limits, depending on the subscription type. + +Azure enrollment hierarchies define how services are structured within an Enterprise Agreement. The Enterprise Portal allows customers to divide access to Azure resources associated with an Enterprise Agreement based on flexible hierarchies customizable to an organization's unique needs. The hierarchy pattern should match an organization's management and geographic structure so that the associated billing and resource access can be accurately accounted for.
+ +The three high-level patterns are functional, business unit, and geographic, using departments as an administrative construct for account groupings. Within each department, accounts can be assigned subscriptions, which create silos for billing and several key limits in Azure (for example, the number of VMs and storage accounts). + +![Subscription controls](./media/governance-in-azure/security-governance-in-azure-fig2.png) + +For organizations with an Enterprise Agreement, Azure subscriptions follow a four-level hierarchy: + +- Enterprise enrollment administrator + +- Department administrator + +- Account owner + +- Service administrator + +This hierarchy governs the following: + +- Billing relationship + +- Account administration + +- Role-Based Access Control (RBAC) for artifacts + +- Boundaries and limits + + - Usage and billing (rate card based on offer numbers) + + - Limits, such as the number of virtual networks + +- Attachment to one Azure Active Directory tenant (a single tenant can be associated with many subscriptions) + +- Association with an enterprise enrollment account + +## Role-based access controls + +When Azure was initially released, access controls to a subscription were basic: Administrator or Co-Administrator. Access to a subscription in the Classic model implied access to all the resources in the portal. This lack of fine-grained control led to the proliferation of subscriptions to provide a level of reasonable access control for an Azure enrollment. + +![Role-based access controls](./media/governance-in-azure/security-governance-in-azure-fig3.png) + +This proliferation of subscriptions is no longer needed. With role-based access control, you can assign users to standard roles (such as common "reader" and "writer" types of roles). You can also define custom roles. + +[Azure Role-Based Access Control (RBAC)](https://docs.microsoft.com/azure/active-directory/role-based-access-built-in-roles) enables fine-grained access management for Azure.
Using RBAC, you can grant only the amount of access that users need to perform their jobs. Security-oriented companies should focus on giving employees the exact permissions they need: too many permissions expose an account to attackers, while too few mean that employees can't get their work done efficiently. RBAC helps address this problem by letting you segregate duties within your team. Instead of giving everybody unrestricted permissions in your Azure subscription or resources, you can allow only certain actions. + +For example, use RBAC to let one employee manage virtual machines in a subscription, while another manages SQL databases within the same subscription. + +Azure RBAC has three basic roles that apply to all resource types: + +- **Owner** has full access to all resources, including the right to delegate access to others. + +- **Contributor** can create and manage all types of Azure resources but can't grant access to others. + +- **Reader** can view existing Azure resources. + +The rest of the RBAC roles in Azure allow management of specific Azure resources. For example, the Virtual Machine Contributor role allows the user to create and manage virtual machines. It does not give them access to the virtual network or the subnet that the virtual machine connects to. + +[RBAC built-in roles](https://docs.microsoft.com/azure/active-directory/role-based-access-built-in-roles) lists the roles available in Azure. It specifies the operations and scope that each built-in role grants to users. + +Grant access by assigning the appropriate RBAC role to users, groups, and applications at a certain scope. The scope of a role assignment can be a subscription, a resource group, or a single resource. A role assigned at a parent scope also grants access to the children contained within it.
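When the built-in roles do not fit, custom roles are described as JSON documents. The following is a minimal sketch, assuming the JSON format accepted by the Azure PowerShell and CLI role-definition commands; the role name, actions, and subscription placeholder are illustrative, not taken from this article:

```json
{
  "Name": "Virtual Machine Operator",
  "Description": "Can read, start, and restart virtual machines, but not create or delete them.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/restart/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/{subscription-id}"
  ]
}
```

Listing the subscription under `AssignableScopes` means the role can then be assigned at the subscription, resource group, or individual resource scope, following the parent-scope inheritance described above.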
+ +For example, a user with access to a resource group can manage all the resources it contains, like websites, virtual machines, and subnets. + +Azure RBAC only supports management operations of the Azure resources in the Azure portal and Azure Resource Manager APIs. It cannot authorize all data-level operations for Azure resources. For example, you can authorize someone to manage storage accounts, but not the blobs or tables within a storage account. Similarly, a SQL database can be managed, but not the tables within it. + +If you want more details about how RBAC helps you manage access, see [What is Role-Based Access Control](https://docs.microsoft.com/azure/active-directory/role-based-access-control-what-is). + +You can also [create a custom role](https://docs.microsoft.com/azure/active-directory/role-based-access-control-custom-roles) in Azure Role-Based Access Control (RBAC) if none of the built-in roles meet your specific access needs. Custom roles can be created using [Azure PowerShell](https://docs.microsoft.com/azure/active-directory/role-based-access-control-manage-access-powershell), [Azure Command-Line Interface (CLI)](https://docs.microsoft.com/azure/active-directory/role-based-access-control-manage-access-azure-cli), and the [REST API](https://docs.microsoft.com/azure/active-directory/role-based-access-control-manage-access-rest). Just like built-in roles, custom roles can be assigned to users, groups, and applications at subscription, resource group, and resource scopes. + +Within each subscription, you can grant up to 2000 role assignments. + +## Resource management + +Azure originally provided only the classic deployment model. In this model, each resource existed independently; there was no way to group related resources together. Instead, you had to manually track which resources made up your solution or application, and remember to manage them in a coordinated approach.
+ +To deploy a solution, you had to either create each resource individually through the classic portal or create a script that deployed all the resources in the correct order. To delete a solution, you had to delete each resource individually. You could not easily apply and update access control policies for related resources. Finally, you could not apply tags to resources to label them with terms that help you monitor your resources and manage billing. + +In 2014, Azure introduced Resource Manager, which added the concept of a resource group. A resource group is a container for resources that share a common lifecycle. The Resource Manager deployment model provides several benefits: + +- You can deploy, manage, and monitor all the services for your solution as a group, rather than handling these services individually. + +- You can repeatedly deploy your solution throughout its lifecycle and have confidence your resources are deployed in a consistent state. + +- You can apply access control to all resources in your resource group, and those policies are automatically applied when new resources are added to the resource group. + +- You can apply tags to resources to logically organize all the resources in your subscription. + +- You can use JavaScript Object Notation (JSON) to define the infrastructure for your solution. The JSON file is known as a Resource Manager template. + +- You can define the dependencies between resources so they are deployed in the correct order. + +![Resource Management](./media/governance-in-azure/security-governance-in-azure-fig4.png) + +Resource Manager enables you to put resources into meaningful groups for management, billing, or natural affinity. As mentioned earlier, Azure has two deployment models. In the earlier [Classic model](https://docs.microsoft.com/azure/azure-resource-manager/resource-manager-deployment-model), the basic unit of management was the subscription. 
It was difficult to break down resources within a subscription, which led to the creation of large numbers of subscriptions. With the Resource Manager model, we saw the introduction of resource groups. + +A resource group is a container that holds related resources for an Azure solution. [The resource group](https://docs.microsoft.com/azure/azure-resource-manager/resource-group-overview) can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to allocate resources to resource groups based on what makes the most sense for your organization. + +For recommendations about templates, see [Best practices for creating Azure Resource Manager templates](https://docs.microsoft.com/azure/azure-resource-manager/resource-manager-template-best-practices). + +Azure Resource Manager analyzes dependencies to ensure resources are created in the correct order. If one resource relies on a value from another resource (such as a virtual machine needing a storage account for disks), you set a dependency. + +>[!Note] +>For more information, see [Defining dependencies in Azure Resource Manager templates](https://docs.microsoft.com/azure/azure-resource-manager/resource-group-define-dependencies). + +You can also use the template for updates to the infrastructure. For example, you can add a resource to your solution and add configuration rules for the resources that are already deployed. If the template specifies creating a resource but that resource already exists, Azure Resource Manager performs an update instead of creating a new asset. Azure Resource Manager updates the existing asset to the same state it would have if it were newly created. + +Resource Manager provides extensions for scenarios in which you need additional operations, such as installing software that is not included in the setup.
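To make the dependency mechanism concrete, here is a minimal sketch of a Resource Manager template; the resource names are illustrative, and the virtual machine properties are omitted for brevity. The `dependsOn` element tells Resource Manager to create the storage account before the virtual machine that uses it for disks:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2016-01-01",
      "name": "examplediskstore",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "Storage",
      "properties": {}
    },
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2016-03-30",
      "name": "exampleVM",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[resourceId('Microsoft.Storage/storageAccounts', 'examplediskstore')]"
      ],
      "properties": {}
    }
  ]
}
```

Because the template is declarative, redeploying it updates the existing resources rather than creating duplicates, as described above.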
+ +## Resource tracking + +As users in your organization add resources to the subscription, it becomes increasingly important to associate resources with the appropriate department, customer, and environment. You can attach metadata to resources through tags. You use [tags](https://docs.microsoft.com/azure/azure-resource-manager/resource-group-using-tags) to provide information about the resource or the owner. Tags enable you not only to aggregate and group resources in several ways, but also to use that data for chargeback. + +Use tags when you have a complex collection of resource groups and resources, and need to visualize those assets in the way that makes the most sense to you. For example, you could tag resources that serve a similar role in your organization or belong to the same department. + +Without tags, users in your organization can create multiple resources that may be difficult to later identify and manage. For example, you may wish to delete all the resources for a project. If those resources are not tagged for the project, you must manually find them. Tagging can be an important way for you to reduce unnecessary costs in your subscription. + +Resources do not need to reside in the same resource group to share a tag. You can create your own tag taxonomy to ensure that all users in your organization use common tags rather than users inadvertently applying slightly different tags (such as "dept" instead of "department"). + +Resource policies enable you to create standard rules for your organization. You can create policies that ensure resources are tagged with the appropriate values. + +> [!Note] +> For more information, see [Apply resource policies for tags](https://docs.microsoft.com/azure/azure-resource-manager/resource-manager-policy-tags). + +You can also view tagged resources through the Azure portal.
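As a sketch of such a rule (assuming the resource policy definition syntax; the tag name "department" is illustrative), a policy that denies the creation of resources missing a required tag could look like:

```json
{
  "if": {
    "not": {
      "field": "tags",
      "containsKey": "department"
    }
  },
  "then": {
    "effect": "deny"
  }
}
```

Assigning a policy like this at a subscription or resource group scope enforces the tag taxonomy at deployment time, instead of relying on users to remember it.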
+ +The [usage report](https://docs.microsoft.com/azure/billing/billing-understand-your-bill) for your subscription includes tag names and values, which enables you to break out costs by tags. + +> [!Note] +> For more information about tags, see [Using tags to organize your Azure resources](https://docs.microsoft.com/azure/azure-resource-manager/resource-group-using-tags). + +The following limitations apply to tags: + +- Each resource or resource group can have a maximum of 15 tag key/value pairs. This limitation only applies to tags directly applied to the resource group or resource. A resource group can contain many resources that each have 15 tag key/value pairs. + +- The tag name is limited to 512 characters. + +- The tag value is limited to 256 characters. + +- Tags applied to the resource group are not inherited by the resources in that resource group. + +If you have more than 15 values that you need to associate with a resource, use a JSON string for the tag value. The JSON string can contain many values that are applied to a single tag key. + +### Tags and billing + +Tags enable you to group your billing data. For example, if you are running multiple VMs for different organizations, use the tags to group usage by cost center. You can also use tags to categorize costs by runtime environment, such as the billing usage for VMs running in the production environment. + +You can retrieve information about tags through the [Azure Resource Usage and RateCard APIs](https://docs.microsoft.com/azure/billing/billing-usage-rate-card-overview) or the usage comma-separated values (CSV) file. You download the usage file from the [Azure accounts portal](https://account.windowsazure.com/) or [EA portal](https://ea.azure.com/). + +>[!Note] +> For more information about programmatic access to billing information, see [Gain insights into your Microsoft Azure resource consumption](https://docs.microsoft.com/azure/billing/billing-usage-rate-card-overview).
For REST API operations, see [Azure Billing REST API Reference](https://msdn.microsoft.com/library/azure/1ea5b323-54bb-423d-916f-190de96c6a3c). + +When you download the usage CSV for services that support tags with billing, the tags appear in the Tags column. + +## Critical resource controls + +As your organization adds core services to the subscription, it becomes increasingly important to ensure that those services are available to avoid business disruption. [Resource locks](https://docs.microsoft.com/azure/azure-resource-manager/resource-group-lock-resources) enable you to restrict operations on high-value resources where modifying or deleting them would have a significant impact on your applications or cloud infrastructure. You can apply locks to a subscription, resource group, or resource. Typically, you apply locks to foundational resources such as virtual networks, gateways, and storage accounts. + +Resource locks currently support two values: CanNotDelete and ReadOnly. CanNotDelete means that users (with the appropriate rights) can still read or modify a resource but cannot delete it. ReadOnly means that authorized users can read a resource but can't delete or modify it. + +Resource Manager locks apply only to operations that happen in the management plane, which consists of operations sent to the Azure Resource Manager endpoint. The locks do not restrict how resources perform their own functions: resource changes are restricted, but resource operations are not. For example, a ReadOnly lock on a SQL database prevents you from deleting or modifying the database, but it does not prevent you from creating, updating, or deleting data in the database. + +Applying **ReadOnly** can lead to unexpected results because some operations that seem like read operations require additional actions. For example, placing a **ReadOnly** lock on a storage account prevents all users from listing the keys.
The list keys operation is handled through a POST request because the returned keys are available for write operations. + +![Critical Resource Controls](./media/governance-in-azure/security-governance-in-azure-fig5.png) + +For another example, placing a ReadOnly lock on an App Service resource prevents Visual Studio Server Explorer from displaying files for the resource because that interaction requires write access. + +Unlike role-based access control, you use management locks to apply a restriction across all users and roles. To learn about setting permissions for users and roles, see [Azure Role-based Access Control](https://docs.microsoft.com/azure/active-directory/role-based-access-control-configure). + +When you apply a lock at a parent scope, all resources within that scope inherit the same lock. Even resources you add later inherit the lock from the parent. The most restrictive lock in the inheritance takes precedence. + +To create or delete management locks, you must have access to `Microsoft.Authorization/*` or `Microsoft.Authorization/locks/*` actions. Of the built-in roles, only **Owner** and **User Access Administrator** are granted those actions. + +## API access to billing information + +Use Azure Billing APIs to pull usage and resource data into your preferred data analysis tools. The Azure Resource Usage and RateCard APIs can help you accurately predict and manage your costs. The APIs are implemented as a Resource Provider and are part of the family of APIs exposed by the Azure Resource Manager. + +### Azure resource usage API (Preview) + +Use the Azure [Resource Usage API](https://msdn.microsoft.com/library/azure/mt219003) to get your estimated Azure consumption data.
The API includes: + +- **Azure Role-based Access Control** - Configure access policies on the [Azure portal](https://portal.azure.com/) or through [Azure PowerShell cmdlets](https://docs.microsoft.com/powershell/azure/overview) to specify which users or applications can get access to the subscription's usage data. Callers must use standard Azure Active Directory tokens for authentication. Add the caller to either the Billing Reader, Reader, Owner, or Contributor role to get access to the usage data for a specific Azure subscription. + +- **Hourly or Daily Aggregations** - Callers can specify whether they want their Azure usage data in hourly buckets or daily buckets. The default is daily. + +- **Instance metadata (includes resource tags)** - Get instance-level detail like the fully qualified resource URI (/subscriptions/{subscription-id}/..), the resource group information, and resource tags. This metadata helps you deterministically and programmatically allocate usage by the tags, for use cases like cross-charging. + +- **Resource metadata** - Resource details such as the meter name, meter category, meter subcategory, unit, and region give the caller a better understanding of what was consumed. We're also working to align resource metadata terminology across the Azure portal, Azure usage CSV, EA billing CSV, and other public-facing experiences, to let you correlate data across experiences. + +- **Usage for all offer types** - Usage data is available for all offer types like Pay-as-you-go, MSDN, Monetary commitment, Monetary credit, and EA. + +### Azure resource RateCard API (Preview) + +Use the Azure Resource RateCard API to get the list of available Azure resources and estimated pricing information for each. The API includes: + +- **Azure Role-based Access Control** - Configure your access policies on the Azure portal or through Azure PowerShell cmdlets to specify which users or applications can get access to the RateCard data.
Callers must use standard Azure Active Directory tokens for authentication. Add the caller to either the Reader, Owner, or Contributor role to get access to the usage data for a particular Azure subscription. + +- **Support for Pay-as-you-go, MSDN, Monetary commitment, and Monetary credit offers (EA not supported)** - This API provides Azure offer-level rate information. The caller of this API must pass in the offer information to get resource details and rates. We're currently unable to provide EA rates because EA offers have customized rates per enrollment. + +Here are some of the scenarios that are made possible with the combination of the Usage and the RateCard APIs: + +- **Azure spend during the month** - Use the combination of the Usage and RateCard APIs to get better insights into your cloud spend during the month. You can analyze the hourly and daily buckets of usage and charge estimates. + +- **Set up alerts** - Use the Usage and the RateCard APIs to get estimated cloud consumption and charges, and set up resource-based or monetary-based alerts. + +- **Predict bill** - Get your estimated consumption and cloud spend, and apply machine learning algorithms to predict what the bill would be at the end of the billing cycle. + +- **Pre-consumption cost analysis** - Use the RateCard API to predict how much your bill would be for your expected usage when you move your workloads to Azure. If you have existing workloads in other clouds or private clouds, you can also map your usage with the Azure rates to get a better estimate of Azure spend. This estimate gives you the ability to pivot on offer, and compare between the different offer types beyond Pay-As-You-Go, like monetary commitment and monetary credit. The API also gives you the ability to see cost differences by region and allows you to do a what-if cost analysis to help you make deployment decisions.
+ +- **What-if analysis** - You can determine whether it is more cost-effective to run workloads in another region, or on another configuration of the Azure resource. Azure resource costs may differ based on the Azure region you're using. + +- You can also determine if another Azure offer type gives a better rate on an Azure resource. + +## Networking controls + +Access to resources can be either internal (within the corporation's network) or external (through the internet). It is easy for users in your organization to inadvertently put resources in the wrong spot, and potentially open them to malicious access. As with on-premises devices, enterprises must add appropriate controls to ensure that Azure users make the right decisions. + +![Networking Controls](./media/governance-in-azure/security-governance-in-azure-fig6.png) + +For subscription governance, we identify core resources that provide basic control of access. The core resources consist of: + +### Network connectivity + +[Virtual Networks](https://docs.microsoft.com/azure/virtual-network/virtual-networks-overview) are container objects for subnets. Though not strictly necessary, they are often used when connecting applications to internal corporate resources. The Azure Virtual Network service enables you to securely connect Azure resources to each other with virtual networks (VNets). + +A VNet is a representation of your own network in the cloud. A VNet is a logical isolation of the Azure cloud dedicated to your subscription. You can also connect VNets to your on-premises network. + +The following are capabilities of Azure Virtual Networks: + +- **Isolation**: VNets are isolated from one another. You can create separate VNets for development, testing, and production that use the same CIDR address blocks. Conversely, you can create multiple VNets that use different CIDR address blocks and connect networks together. You can segment a VNet into multiple subnets.
Azure provides internal name resolution for VMs and Cloud Services role instances connected to a VNet. You can optionally configure a VNet to use your own DNS servers instead of Azure internal name resolution. + +- **Internet connectivity**: All Azure Virtual Machines (VMs) and Cloud Services role instances connected to a VNet have access to the Internet by default. You can also enable inbound access to specific resources, as needed. + +- **Azure resource connectivity**: Azure resources such as Cloud Services and VMs can be connected to the same VNet. The resources can connect to each other using private IP addresses, even if they are in different subnets. Azure provides default routing between subnets, VNets, and on-premises networks, so you don't have to configure and manage routes. + +- **VNet connectivity**: VNets can be connected to each other, enabling resources connected to any VNet to communicate with any resource on any other VNet. + +- **On-premises connectivity**: VNets can be connected to on-premises networks through private network connections between your network and Azure, or through a site-to-site VPN connection over the Internet. + +- **Traffic filtering**: Network traffic for VMs and Cloud Services role instances can be filtered, inbound and outbound, by source IP address and port, destination IP address and port, and protocol. + +- **Routing**: You can optionally override Azure's default routing by configuring your own routes, or by using BGP routes through a network gateway. + +## Network access controls + +[Network security groups](https://docs.microsoft.com/azure/virtual-network/virtual-networks-nsg) are like a firewall and provide rules for how a resource can "talk" over the network. They provide granular control over how, or whether, a subnet (or virtual machine) can connect to the Internet or other subnets in the same virtual network.
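As a rough mental model of how firewall-like rules gate traffic, the following is a minimal Python sketch, not the actual NSG engine: rules are evaluated in priority order, the first match wins, and unmatched traffic falls through to a default deny. It assumes simplified string matching rather than real CIDR and port-range evaluation, and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    priority: int   # lower number = evaluated first
    direction: str  # "Inbound" or "Outbound"
    protocol: str   # "Tcp", "Udp", or "*"
    dest_port: str  # e.g. "443" or "*"
    source: str     # simplified source tag, e.g. "Internet" or "*"
    access: str     # "Allow" or "Deny"

def evaluate(rules, direction, protocol, dest_port, source):
    """Return the access decision of the first matching rule, by priority."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.direction != direction:
            continue
        if rule.protocol not in ("*", protocol):
            continue
        if rule.dest_port not in ("*", dest_port):
            continue
        if rule.source not in ("*", source):
            continue
        return rule.access
    return "Deny"  # mirror a default-deny catch-all

rules = [
    Rule(100, "Inbound", "Tcp", "443", "*", "Allow"),   # allow HTTPS from anywhere
    Rule(200, "Inbound", "*", "*", "Internet", "Deny"), # deny everything else from the Internet
]

print(evaluate(rules, "Inbound", "Tcp", "443", "Internet"))  # Allow
print(evaluate(rules, "Inbound", "Tcp", "22", "Internet"))   # Deny
```

The priority ordering is what makes it safe to combine a broad deny rule with narrow allow exceptions, which is the usual pattern when locking a subnet down.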
+ +A network security group (NSG) contains a list of security rules that allow or deny network traffic to resources connected to Azure Virtual Networks (VNets). NSGs can be associated to subnets, individual VMs (classic), or individual network interfaces (NICs) attached to VMs (Resource Manager). + +When an NSG is associated to a subnet, the rules apply to all resources connected to the subnet. Traffic can be further restricted by also associating an NSG to a VM or NIC. + +## Security and continuous compliance with organizational standards + +Every business has different needs, and every business will reap distinct benefits from cloud solutions. Still, customers of all kinds have the same basic concerns about moving to the cloud. They want to retain control of their data, and they want that data to be kept secure and private, all while maintaining transparency and compliance. + +What customers want from cloud providers: + +- **Secure our data**: While acknowledging that the cloud can provide increased data security and administrative control, IT leaders are still concerned that migrating to the cloud will leave them more vulnerable to hackers than their current in-house solutions. + +- **Keep our data private**: Cloud services raise unique privacy challenges for businesses. As companies look to the cloud to save on infrastructure costs and improve their flexibility, they also worry about losing control of where their data is stored, who is accessing it, and how it gets used. + +- **Give us control**: Even as they take advantage of the cloud to deploy more innovative solutions, companies are very concerned about losing control of their data. The recent disclosures of government agencies accessing customer data, through both legal and extra-legal means, make some CIOs wary of storing their data in the cloud.
+ +- **Promote transparency**: While security, privacy, and control are important to business decision-makers, they also want the ability to independently verify how their data is being stored, accessed, and secured. + +- **Maintain compliance**: As companies expand their use of cloud technologies, the complexity and scope of standards and regulations continue to evolve. Companies need to know that their compliance standards will be met, and that compliance will evolve as regulations change over time. + +## Security configuration, monitoring, and alerting + +Azure subscribers may manage their cloud environments from multiple devices, including management workstations, developer PCs, and even privileged end-user devices that have task-specific permissions. In some cases, administrative functions are performed through web-based consoles such as the Azure portal. In other cases, there may be direct connections to Azure from on-premises systems over Virtual Private Networks (VPNs), Terminal Services, client application protocols, or (programmatically) the Azure Service Management API (SMAPI). Additionally, client endpoints can be either domain joined or isolated and unmanaged, such as tablets or smartphones. + +Although multiple access and management capabilities provide a rich set of options, this variability can add significant risk to a cloud deployment. It can be difficult to manage, track, and audit administrative actions. This variability may also introduce security threats through unregulated access to client endpoints that are used for managing cloud services. Using general or personal workstations for developing and managing infrastructure opens unpredictable threat vectors such as web browsing (for example, watering hole attacks) or email (for example, social engineering and phishing).
+ +Monitoring, logging, and auditing provide a basis for tracking and understanding administrative activities, but it may not always be feasible to audit all actions in complete detail due to the amount of data generated. Auditing the effectiveness of the management policies is a best practice, however. + +For Azure security governance, use Active Directory Domain Services (AD DS) GPOs to control all the administrators' Windows interfaces, such as file sharing. Include management workstations in auditing, monitoring, and logging processes. Track all administrator and developer access and usage. + +### Azure Security Center + +[Azure Security Center](https://docs.microsoft.com/azure/security-center/security-center-intro) provides a central view of the security status of resources in your subscriptions, and provides recommendations that help prevent compromised resources. It can enable more granular policies (for example, applying policies to specific resource groups, which allows the enterprise to tailor its posture to the risk it is addressing). + +![Azure Security Center](./media/governance-in-azure/security-governance-in-azure-fig7.png) + +Security Center provides integrated security monitoring and policy management across your Azure subscriptions, helps detect threats that might otherwise go unnoticed, and works with a broad ecosystem of security solutions. After you enable [security policies](https://docs.microsoft.com/azure/security-center/security-center-policies) for a subscription's resources, Security Center analyzes the security of your resources to identify potential vulnerabilities. Information about your network configuration is available instantly. + +Azure Security Center represents a combination of best practice analysis and security policy management for all resources within an Azure subscription.
This powerful and easy-to-use tool allows security teams and risk officers to prevent, detect, and respond to security threats. It automatically collects and analyzes security data from your Azure resources, the network, and partner solutions like anti-malware programs and firewalls. + +In addition, Azure Security Center applies advanced analytics, including machine learning and behavioral analysis, while leveraging global threat intelligence from Microsoft products and services, the Microsoft Digital Crimes Unit (DCU), the Microsoft Security Response Center (MSRC), and external feeds. [Security governance](https://www.credera.com/blog/credera-site/azure-governance-part-4-other-tools-in-the-toolbox/) can be applied broadly at the subscription level or narrowed down to specific, granular requirements applied to individual resources through policy definition. + +Azure Security Center then analyzes resource security health based on those policies, and uses the results to provide insightful dashboards and alerting for events such as malware detection or malicious IP connection attempts. + +>[!Note] +> For more information about how to apply recommendations, read [Implementing security recommendations in Azure Security Center](https://docs.microsoft.com/azure/security-center/security-center-recommendations). + +Security Center collects data from your virtual machines to assess their security state, provide security recommendations, and alert you to threats. When you first access Security Center, data collection is enabled on all virtual machines in your subscription. Data collection is recommended, but you can opt out by [disabling data collection](https://docs.microsoft.com/azure/security-center/security-center-faq) in the Security Center policy. + +Finally, Azure Security Center is an open platform that enables Microsoft partners and independent software vendors to create software that plugs into Azure Security Center to enhance its capabilities.
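To illustrate the kind of detection behind an alert like "malicious IP connection attempts," here is a minimal Python sketch, not Security Center's actual detection logic: connection records are matched against a threat-intelligence blocklist and any hit becomes an alert. The IP addresses, field names, and function are hypothetical, and real detections combine many more signals.

```python
def flag_malicious_connections(connections, blocklist):
    """Return alert records for any connection whose remote IP is on a threat list."""
    alerts = []
    for conn in connections:
        if conn["remote_ip"] in blocklist:
            alerts.append({
                "resource": conn["resource"],
                "remote_ip": conn["remote_ip"],
                "alert": "Malicious IP connection attempt",
            })
    return alerts

# Hypothetical threat-intelligence feed entry (documentation TEST-NET range)
blocklist = {"203.0.113.7"}

connections = [
    {"resource": "vm-web-01", "remote_ip": "198.51.100.2"},  # benign
    {"resource": "vm-web-01", "remote_ip": "203.0.113.7"},   # on the blocklist
]

for alert in flag_malicious_connections(connections, blocklist):
    print(alert["resource"], alert["alert"], alert["remote_ip"])
```

The value of a central service doing this matching is that the blocklist is curated globally and applied across every monitored resource, rather than each VM maintaining its own.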
+ +Azure Security Center monitors the following Azure resources: + +- Virtual machines (VMs) (including Cloud Services) + +- Azure Virtual Networks + +- Azure SQL service + +- Partner solutions integrated with your Azure subscription, such as a web application firewall on VMs and on [App Service Environment](https://docs.microsoft.com/azure/app-service/app-service-app-service-environments-readme). + +### Operations Management Suite + +The OMS software development and service team's information security and [governance program](https://github.com/Microsoft/azure-docs/blob/master/articles/log-analytics/log-analytics-security.md) supports its business requirements and adheres to laws and regulations as described at the [Microsoft Azure Trust Center](https://azure.microsoft.com/support/trust-center/) and [Microsoft Trust Center Compliance](https://www.microsoft.com/TrustCenter/Compliance/default.aspx). How OMS establishes security requirements, identifies security controls, and manages and monitors risks is also described there. Annually, we review policies, standards, procedures, and guidelines. + +Each OMS development team member receives formal application security training. Internally, we use a version control system for software development. Each software project is protected by the version control system. + +Microsoft has a security and compliance team that oversees and assesses all services in Microsoft. Information security officers make up the team, and they are not associated with the engineering departments that develop OMS. The security officers have their own management chain and conduct independent assessments of products and services to ensure security and compliance. + +Operations Management Suite (also known as OMS) is a collection of management services that were designed in the cloud from the start. Rather than deploying and managing on-premises resources, OMS components are entirely hosted in Azure.
Configuration is minimal, and you can be up and running literally in a matter of minutes. + +![Operations Management Suite](./media/governance-in-azure/security-governance-in-azure-fig8.png) + +Just because OMS services run in the cloud doesn't mean that they can't effectively manage your on-premises environment. + +Put an agent on any Windows or Linux computer in your data center, and it will send data to Log Analytics, where it can be analyzed along with all other data collected from cloud or on-premises services. Use Azure Backup and Azure Site Recovery to leverage the cloud for backup and high availability for on-premises resources. + +Runbooks in the cloud can't typically access your on-premises resources, but you can install an agent on one or more computers in your data center to host runbooks there. When you start a runbook, you simply specify whether you want it to run in the cloud or on a local worker. + +The core functionality of OMS is provided by a set of services that run in Azure. Each service provides a specific management function, and you can combine services to achieve different management scenarios. + +![Operations Management Suite](./media/governance-in-azure/security-governance-in-azure-fig9.JPG) + +OMS extends its functionality by providing management solutions. [Management solutions](https://docs.microsoft.com/azure/operations-management-suite/operations-management-suite-solutions) are prepackaged sets of logic that implement a management scenario leveraging one or more OMS services. + +![Management solutions](./media/governance-in-azure/security-governance-in-azure-fig10.png) + +Different solutions are available from Microsoft and from partners that you can easily add to your Azure subscription to increase the value of your investment in OMS. + +As a partner, you can create your own solutions to support your applications and services, and provide them to users through the Azure Marketplace or Quick Start Templates.
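The idea that agents from cloud and on-premises sources all feed one queryable repository can be sketched in a few lines of Python. This is a toy model under stated assumptions, not the Log Analytics service or its query language; the class, source names, and record shape are all hypothetical.

```python
from collections import defaultdict

class LogStore:
    """Toy central repository: every source pushes records into one queryable list."""

    def __init__(self):
        self.records = []

    def ingest(self, source, records):
        # Tag each record with its origin so cross-source queries stay possible.
        for record in records:
            self.records.append({"source": source, **record})

    def count_by(self, field):
        # Aggregate across ALL sources at once, like a query over the whole workspace.
        counts = defaultdict(int)
        for record in self.records:
            counts[record[field]] += 1
        return dict(counts)

store = LogStore()
store.ingest("azure-vm", [{"level": "Error"}, {"level": "Info"}])   # cloud agent
store.ingest("on-prem-agent", [{"level": "Error"}])                 # data center agent

print(store.count_by("level"))   # {'Error': 2, 'Info': 1}
print(store.count_by("source"))  # {'azure-vm': 2, 'on-prem-agent': 1}
```

The point of the sketch is the design choice: because records land in one store regardless of origin, a single query spans cloud and on-premises data, which is what makes the "holistic" analysis described above possible.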
+ +## Performance alerting and monitoring + +### Alerting + +Alerts are a method of monitoring Azure resource metrics, events, or logs, and being notified when a condition you specify is met. + +**Alerts in different Azure services** + +Alerts are available across different services, including: + +- Application Insights: Enables web test and metric alerts. + +>[!Note] +> See [Set alerts in Application Insights](https://docs.microsoft.com/azure/application-insights/app-insights-alerts) and [Monitor availability and responsiveness of any website](https://docs.microsoft.com/azure/application-insights/app-insights-monitor-web-app-availability). + +- Log Analytics (Operations Management Suite): Enables the routing of Activity and Diagnostic Logs to Log Analytics. Operations Management Suite allows metric, log, and other alert types. + +>[!Note] +> For more information, see [Alerts in Log Analytics](https://docs.microsoft.com/azure/log-analytics/log-analytics-alerts). + +- Azure Monitor: Enables alerts based on both metric values and activity log events. You can use the [Azure Monitor REST API](https://msdn.microsoft.com/library/dn931943.aspx) to manage alerts. + +>[!Note] +> For more information, see [Using the Azure portal, PowerShell, or the command-line interface to create alerts](https://docs.microsoft.com/azure/monitoring-and-diagnostics/insights-alerts-portal). + +### Monitoring + +Performance issues in your cloud app can impact your business. With multiple interconnected components and frequent releases, degradations can happen at any time. And if you're developing an app, your users usually discover issues that you didn't find in testing. You should know about these issues immediately, and have tools for diagnosing and fixing the problems. Microsoft Azure has a range of tools for identifying these problems. + +**How do I monitor my Azure cloud apps?** + +There is a range of tools for monitoring Azure applications and services. Some of their features overlap.
This is partly for historical reasons and partly due to the blurring between development and operation of an application. + +Here are the principal tools: + +- **Azure Monitor** is the basic tool for monitoring services running on Azure. It gives you infrastructure-level data about the throughput of a service and the surrounding environment. If you're managing your apps all in Azure and deciding whether to scale resources up or down, Azure Monitor is the place to start. + +- **Application Insights** can be used for development and as a production monitoring solution. It works by installing a package into your app, which gives you a more internal view of what's going on. Its data includes response times of dependencies, exception traces, debugging snapshots, and execution profiles. It provides powerful smart tools for analyzing all this telemetry, both to help you debug an app and to help you understand what users are doing with it. You can tell whether a spike in response times is due to something in an app or some external resourcing issue. If you use Visual Studio and the app is at fault, you can be taken right to the problem line(s) of code so you can fix it. + +- **Log Analytics** is for those who need to tune performance and plan maintenance on applications running in production. It is based in Azure. It collects and aggregates data from many sources, though with a delay of 10 to 15 minutes. It provides a holistic IT management solution for Azure, on-premises, and third-party cloud-based infrastructure (such as Amazon Web Services). It provides richer tools to analyze data across more sources, allows complex queries across all logs, and can proactively alert on specified conditions. You can even collect custom data into its central repository, so you can query and visualize it. + +- **System Center Operations Manager (SCOM)** is for managing and monitoring large cloud installations.
You might already be familiar with it as a management tool for on-premises Windows Server and Hyper-V based clouds, but it can also integrate with and manage Azure apps. Among other things, it can install Application Insights on existing live apps. If an app goes down, it tells you in seconds. + + +## Next steps + +- [Best practices for creating Azure Resource Manager templates](https://docs.microsoft.com/azure/azure-resource-manager/resource-manager-template-best-practices). + +- [Examples of implementing Azure subscription governance](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-subscription-examples). + +- [Microsoft Azure Government](https://docs.microsoft.com/azure/azure-government/). diff --git a/articles/security/media/governance-in-azure/security-governance-in-azure-fig1.png b/articles/security/media/governance-in-azure/security-governance-in-azure-fig1.png new file mode 100644 index 0000000000000..75fa5021cab1e Binary files /dev/null and b/articles/security/media/governance-in-azure/security-governance-in-azure-fig1.png differ diff --git a/articles/security/media/governance-in-azure/security-governance-in-azure-fig10.png b/articles/security/media/governance-in-azure/security-governance-in-azure-fig10.png new file mode 100644 index 0000000000000..4a0bce5a4fff1 Binary files /dev/null and b/articles/security/media/governance-in-azure/security-governance-in-azure-fig10.png differ diff --git a/articles/security/media/governance-in-azure/security-governance-in-azure-fig2.png b/articles/security/media/governance-in-azure/security-governance-in-azure-fig2.png new file mode 100644 index 0000000000000..d13654bffe016 Binary files /dev/null and b/articles/security/media/governance-in-azure/security-governance-in-azure-fig2.png differ diff --git a/articles/security/media/governance-in-azure/security-governance-in-azure-fig3.png b/articles/security/media/governance-in-azure/security-governance-in-azure-fig3.png new file mode 100644 index
0000000000000..037e4012c92c5 Binary files /dev/null and b/articles/security/media/governance-in-azure/security-governance-in-azure-fig3.png differ diff --git a/articles/security/media/governance-in-azure/security-governance-in-azure-fig4.png b/articles/security/media/governance-in-azure/security-governance-in-azure-fig4.png new file mode 100644 index 0000000000000..5affc5eac6185 Binary files /dev/null and b/articles/security/media/governance-in-azure/security-governance-in-azure-fig4.png differ diff --git a/articles/security/media/governance-in-azure/security-governance-in-azure-fig5.png b/articles/security/media/governance-in-azure/security-governance-in-azure-fig5.png new file mode 100644 index 0000000000000..b630bb1caf687 Binary files /dev/null and b/articles/security/media/governance-in-azure/security-governance-in-azure-fig5.png differ diff --git a/articles/security/media/governance-in-azure/security-governance-in-azure-fig6.png b/articles/security/media/governance-in-azure/security-governance-in-azure-fig6.png new file mode 100644 index 0000000000000..a8fb50cd28595 Binary files /dev/null and b/articles/security/media/governance-in-azure/security-governance-in-azure-fig6.png differ diff --git a/articles/security/media/governance-in-azure/security-governance-in-azure-fig7.png b/articles/security/media/governance-in-azure/security-governance-in-azure-fig7.png new file mode 100644 index 0000000000000..dbc518d3c6683 Binary files /dev/null and b/articles/security/media/governance-in-azure/security-governance-in-azure-fig7.png differ diff --git a/articles/security/media/governance-in-azure/security-governance-in-azure-fig8.png b/articles/security/media/governance-in-azure/security-governance-in-azure-fig8.png new file mode 100644 index 0000000000000..1b7ea1a6e707a Binary files /dev/null and b/articles/security/media/governance-in-azure/security-governance-in-azure-fig8.png differ diff --git 
a/articles/security/media/governance-in-azure/security-governance-in-azure-fig9.JPG b/articles/security/media/governance-in-azure/security-governance-in-azure-fig9.JPG new file mode 100644 index 0000000000000..1b8a5a17b8f02 Binary files /dev/null and b/articles/security/media/governance-in-azure/security-governance-in-azure-fig9.JPG differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroup-Figure2.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroup-Figure2.png deleted file mode 100644 index 757787da0531a..0000000000000 Binary files a/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroup-Figure2.png and /dev/null differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroupConfigure-Figure7.PNG b/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroupConfigure-Figure7.PNG deleted file mode 100644 index 39555efe1b547..0000000000000 Binary files a/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroupConfigure-Figure7.PNG and /dev/null differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroupHub-Figure4.PNG b/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroupHub-Figure4.PNG deleted file mode 100644 index f24865b285644..0000000000000 Binary files a/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroupHub-Figure4.PNG and /dev/null differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroupName-Figure3.PNG b/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroupName-Figure3.PNG deleted file mode 100644 index 68ca07231b550..0000000000000 Binary files a/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroupName-Figure3.PNG and /dev/null differ diff --git 
a/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroupReference-Figure5.PNG b/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroupReference-Figure5.PNG deleted file mode 100644 index 10ce85329b97e..0000000000000 Binary files a/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroupReference-Figure5.PNG and /dev/null differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroupSyncRules-Figure6.PNG b/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroupSyncRules-Figure6.PNG deleted file mode 100644 index e43cc883de0d1..0000000000000 Binary files a/articles/sql-database/media/sql-database-get-started-sql-data-sync/NewSyncGroupSyncRules-Figure6.PNG and /dev/null differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/SQLDatabaseScreen-Figure1.PNG b/articles/sql-database/media/sql-database-get-started-sql-data-sync/SQLDatabaseScreen-Figure1.PNG deleted file mode 100644 index 7ad1cc8243b7c..0000000000000 Binary files a/articles/sql-database/media/sql-database-get-started-sql-data-sync/SQLDatabaseScreen-Figure1.PNG and /dev/null differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-agent-adddb.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-agent-adddb.png new file mode 100644 index 0000000000000..5e6404b465b53 Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-agent-adddb.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-agent-dbadded.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-agent-dbadded.png new file mode 100644 index 0000000000000..db0dd1b93d1b7 Binary files /dev/null and 
b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-agent-dbadded.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-agent-enterkey.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-agent-enterkey.png new file mode 100644 index 0000000000000..d293b85131e93 Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-agent-enterkey.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-choosegateway.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-choosegateway.png new file mode 100644 index 0000000000000..8564ed0a17811 Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-choosegateway.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-clientagent.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-clientagent.png new file mode 100644 index 0000000000000..927297533f6a6 Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-clientagent.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-conflictres.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-conflictres.png new file mode 100644 index 0000000000000..56f46e4d644c7 Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-conflictres.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-hubadded.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-hubadded.png new file mode 100644 index 
0000000000000..d0b092c82566a Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-hubadded.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-memberadded.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-memberadded.png new file mode 100644 index 0000000000000..bfae10f20ba61 Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-memberadded.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-memberadding.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-memberadding.png new file mode 100644 index 0000000000000..b96349fe1e13a Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-memberadding.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-newsyncgroup.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-newsyncgroup.png new file mode 100644 index 0000000000000..d0908d8d5fa68 Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-newsyncgroup.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-onpremadded.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-onpremadded.png new file mode 100644 index 0000000000000..e587ffc45c6b9 Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-onpremadded.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-properties.png 
b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-properties.png new file mode 100644 index 0000000000000..e26608899e0d6 Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-properties.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-selectdb.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-selectdb.png new file mode 100644 index 0000000000000..2fa3f1d32657c Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-selectdb.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-selectsyncagent.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-selectsyncagent.png new file mode 100644 index 0000000000000..bab9bd70ad51f Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-selectsyncagent.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-sqldbs.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-sqldbs.png new file mode 100644 index 0000000000000..fefdb4994e93f Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-sqldbs.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-syncfreq.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-syncfreq.png new file mode 100644 index 0000000000000..5f12c473fea41 Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-syncfreq.png differ diff --git 
a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-tables.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-tables.png new file mode 100644 index 0000000000000..ed6e5d84a9e57 Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-tables.png differ diff --git a/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-tables2.png b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-tables2.png new file mode 100644 index 0000000000000..eefa1ecc5a0c3 Binary files /dev/null and b/articles/sql-database/media/sql-database-get-started-sql-data-sync/datasync-preview-tables2.png differ diff --git a/articles/sql-database/scripts/sql-database-create-and-configure-database-cli.md b/articles/sql-database/scripts/sql-database-create-and-configure-database-cli.md index 9b1b1730aa72b..0a58231e740dd 100644 --- a/articles/sql-database/scripts/sql-database-create-and-configure-database-cli.md +++ b/articles/sql-database/scripts/sql-database-create-and-configure-database-cli.md @@ -37,7 +37,7 @@ This sample CLI script creates an Azure SQL database and configure a server-leve After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it. 
-```azurecli +```azurecli-interactive az group delete --name myResourceGroup ``` diff --git a/articles/sql-database/scripts/sql-database-monitor-and-scale-database-cli.md b/articles/sql-database/scripts/sql-database-monitor-and-scale-database-cli.md index 2013716bdab76..4e6ef4756c342 100644 --- a/articles/sql-database/scripts/sql-database-monitor-and-scale-database-cli.md +++ b/articles/sql-database/scripts/sql-database-monitor-and-scale-database-cli.md @@ -31,13 +31,13 @@ This sample CLI script scales a single Azure SQL database to a different perform ## Sample script -[!code-azurecli[main](../../../cli_scripts/sql-database/monitor-and-scale-database/monitor-and-scale-database.sh "Monitor and scale single SQL Database")] +[!code-azurecli-interactive[main](../../../cli_scripts/sql-database/monitor-and-scale-database/monitor-and-scale-database.sh "Monitor and scale single SQL Database")] ## Clean up deployment After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it. -```azurecli +```azurecli-interactive az group delete --name myResourceGroup ``` diff --git a/articles/sql-database/scripts/sql-database-move-database-between-pools-cli.md b/articles/sql-database/scripts/sql-database-move-database-between-pools-cli.md index 81e06d559728a..f26efcf2a813a 100644 --- a/articles/sql-database/scripts/sql-database-move-database-between-pools-cli.md +++ b/articles/sql-database/scripts/sql-database-move-database-between-pools-cli.md @@ -37,7 +37,7 @@ This sample CLI script creates two elastic pools and moves a database from one e After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it. 
-```azurecli +```azurecli-interactive az group delete --name myResourceGroup ``` diff --git a/articles/sql-database/scripts/sql-database-scale-pool-cli.md b/articles/sql-database/scripts/sql-database-scale-pool-cli.md index 0d57ebf224f82..9f33149799c4a 100644 --- a/articles/sql-database/scripts/sql-database-scale-pool-cli.md +++ b/articles/sql-database/scripts/sql-database-scale-pool-cli.md @@ -37,7 +37,7 @@ This sample CLI script creates elastic pools, moves pooled databases, and change After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it. -```azurecli +```azurecli-interactive az group delete --name myResourceGroup ``` diff --git a/articles/sql-database/sql-database-firewall-configure.md b/articles/sql-database/sql-database-firewall-configure.md index 2950cc02d4272..050976d914eb9 100644 --- a/articles/sql-database/sql-database-firewall-configure.md +++ b/articles/sql-database/sql-database-firewall-configure.md @@ -162,7 +162,7 @@ New-AzureRmSqlServerFirewallRule -ResourceGroupName "myResourceGroup" ` The following example sets a server-level firewall rule using the Azure CLI: -```azurecli +```azurecli-interactive az sql server firewall-rule create --resource-group myResourceGroup --server $servername \ -n AllowYourIp --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.1 ``` diff --git a/articles/sql-database/sql-database-get-started-cli.md b/articles/sql-database/sql-database-get-started-cli.md index 7e4d0c35e556e..ad12a88752873 100644 --- a/articles/sql-database/sql-database-get-started-cli.md +++ b/articles/sql-database/sql-database-get-started-cli.md @@ -33,7 +33,7 @@ This quick start requires the Azure CLI version 2.0.4 or later. Run `az --versio Log in to your Azure subscription with the [az login](/cli/azure/#login) command and follow the on-screen directions. 
-```azure-cli +```azurecli-interactive az login ``` @@ -41,7 +41,7 @@ az login Define variables for use in the scripts in this quick start. -```azure-cli +```azurecli-interactive # The data center and resource name for your resources export resourcegroupname = myResourceGroup export location = westeurope @@ -61,14 +61,14 @@ export databasename = mySampleDatabase Create an [Azure resource group](../azure-resource-manager/resource-group-overview.md) using the [az group create](/cli/azure/group#create) command. A resource group is a logical container into which Azure resources are deployed and managed as a group. The following example creates a resource group named `myResourceGroup` in the `westeurope` location. -```azurazure-cliecli +```azurecli-interactive az group create --name $resourcegroupname --location $location ``` ## Create a logical server Create an [Azure SQL Database logical server](sql-database-features.md) using the [az sql server create](/cli/azure/sql/server#create) command. A logical server contains a group of databases managed as a group. The following example creates a randomly named server in your resource group with an admin login named `ServerAdmin` and a password of `ChangeYourAdminPassword1`. Replace these pre-defined values as desired. -```azure-cli +```azurecli-interactive az sql server create --name $servername --resource-group $resourcegroupname --location $location \ --admin-user $adminlogin --admin-password $password ``` @@ -77,7 +77,7 @@ az sql server create --name $servername --resource-group $resourcegroupname --lo Create an [Azure SQL Database server-level firewall rule](sql-database-firewall-configure.md) using the [az sql server firewall create](/cli/azure/sql/server/firewall-rule#create) command. A server-level firewall rule allows an external application, such as SQL Server Management Studio or the SQLCMD utility to connect to a SQL database through the SQL Database service firewall. 
In the following example, the firewall is only opened for other Azure resources. To enable external connectivity, change the IP address to an appropriate address for your environment. To open all IP addresses, use 0.0.0.0 as the starting IP address and 255.255.255.255 as the ending address. -```azure-cli +```azurecli-interactive az sql server firewall-rule create --resource-group $resourcegroupname --server $servername \ -n AllowYourIp --start-ip-address $startip --end-ip-address $endip ``` @@ -90,7 +90,7 @@ az sql server firewall-rule create --resource-group $resourcegroupname --server Create a database with an [S0 performance level](sql-database-service-tiers.md) in the server using the [az sql db create](/cli/azure/sql/db#create) command. The following example creates a database called `mySampleDatabase` and loads the AdventureWorksLT sample data into this database. Replace these predefined values as desired (other quick starts in this collection build upon the values in this quick start). -```azure-cli +```azurecli-interactive az sql db create --resource-group $resourcegroupname --server $servername \ --name $databasename --sample-name AdventureWorksLT --service-objective S0 ``` @@ -103,7 +103,7 @@ Other quick starts in this collection build upon this quick start. > If you plan to continue on to work with subsequent quick starts, do not clean up the resources created in this quick start. If you do not plan to continue, use the following steps to delete all resources created by this quick start in the Azure portal. 
> -```azurecli +```azurecli-interactive az group delete --name $resourcegroupname ``` diff --git a/articles/sql-database/sql-database-get-started-portal.md b/articles/sql-database/sql-database-get-started-portal.md index f56409932ff4b..10e8d6f400d76 100644 --- a/articles/sql-database/sql-database-get-started-portal.md +++ b/articles/sql-database/sql-database-get-started-portal.md @@ -105,7 +105,6 @@ The SQL Database service creates a firewall at the server-level that prevents ex ![server firewall rule](./media/sql-database-get-started-portal/server-firewall-rule.png) - 3. Click **Add client IP** on the toolbar to add your current IP address to a new firewall rule. A firewall rule can open port 1433 for a single IP address or a range of IP addresses. 4. Click **Save**. A server-level firewall rule is created for your current IP address opening port 1433 on the logical server. @@ -118,6 +117,7 @@ You can now connect to the SQL Database server and its databases using SQL Serve > [!IMPORTANT] > By default, access through the SQL Database firewall is enabled for all Azure services. Click **OFF** on this page to disable for all Azure services. +> ## Query the SQL database @@ -137,7 +137,7 @@ Now that you have created a sample database in Azure, let’s use the built-in q 5. After you are authenticated, type the following query in the query editor pane. 
- ``` + ```sql SELECT TOP 20 pc.Name as CategoryName, p.name as ProductName FROM SalesLT.ProductCategory pc JOIN SalesLT.Product p diff --git a/articles/sql-database/sql-database-get-started-sql-data-sync.md b/articles/sql-database/sql-database-get-started-sql-data-sync.md index ad9bb9502449a..48d60c4d3cada 100644 --- a/articles/sql-database/sql-database-get-started-sql-data-sync.md +++ b/articles/sql-database/sql-database-get-started-sql-data-sync.md @@ -1,9 +1,9 @@ --- title: Getting started with Azure SQL Data Sync (Preview) | Microsoft Docs -description: This tutorial helps you get started with the Azure SQL Data Sync (Preview). +description: This tutorial helps you get started with Azure SQL Data Sync (Preview). services: sql-database documentationcenter: '' -author: dearandyxu +author: douglaslms manager: jhubbard editor: '' @@ -15,164 +15,177 @@ ms.tgt_pltfrm: na ms.devlang: na ms.topic: article ms.date: 07/11/2016 -ms.author: jhubbard +ms.author: douglasl --- # Getting Started with Azure SQL Data Sync (Preview) -In this tutorial, you learn the fundamentals of Azure SQL Data Sync using the Azure Classic Portal. +In this tutorial, you learn how to set up Azure SQL Data Sync. -This tutorial assumes minimal prior experience with SQL Server and Azure SQL Database. In this tutorial, you create a hybrid (SQL Server and SQL Database instances) sync group fully configured and synchronizing on the schedule you set. +> [!IMPORTANT] +> The Data Sync service update will be available for selected existing Data Sync customers starting June 1. It will be available for all customers by June 15. [Email us](mailto:DataSyncMigration@microsoft.com) with your subscription id for early access. For information about the original Data Sync service, see the [technical documentation for the original service](http://download.microsoft.com/download/4/E/3/4E394315-A4CB-4C59-9696-B25215A19CEF/SQL_Data_Sync_Preview.pdf). 
+ +This tutorial assumes that you have at least some prior experience with SQL Server and Azure SQL Database. In this tutorial, you create a hybrid sync group that contains SQL Server and SQL Database instances. The new sync group is fully configured and synchronizing on the schedule you set. > [!NOTE] -> The complete technical documentation set for Azure SQL Data Sync, formerly located on MSDN, is available as a .pdf. Download it [here](http://download.microsoft.com/download/4/E/3/4E394315-A4CB-4C59-9696-B25215A19CEF/SQL_Data_Sync_Preview.pdf). -> -> +> The complete technical documentation set for Azure SQL Data Sync, formerly located on MSDN, is available as a .pdf. Download it [here](https://github.com/Microsoft/sql-server-samples/raw/master/samples/features/sql-data-sync/Data_Sync_Preview_full_documentation.pdf?raw=true). -## Step 1: Connect to the Azure SQL Database -1. Sign in to the [Classic Portal](http://manage.windowsazure.com). -2. Click **SQL DATABASES** in the left pane. -3. Click **SYNC** at the bottom of the page. When you click SYNC, a list appears showing the things you can add - **New Sync Group** and **New Sync Agent**. -4. To launch the New SQL Data Sync Agent wizard, click **New Sync Agent**. -5. If you haven't added an agent before, **click download it here**. +## Step 1 - Create sync group - ![Image1](./media/sql-database-get-started-sql-data-sync/SQLDatabaseScreen-Figure1.PNG) +### Locate the Data Sync settings -## Step 2: Add a Client Agent -This step is required only if you are going to have an on-premises SQL Server database included in your sync group. -Skip to Step 4 if your sync group has only SQL Database instances. +1. In your browser, navigate to the Azure portal. - +2. In the portal, locate your SQL databases from your Dashboard or from the SQL Databases icon on the toolbar. -### Step 2a: Install the required software -Be sure that you have the following installed on the computer where you install the Client Agent. 
+ ![List of Azure SQL databases](media/sql-database-get-started-sql-data-sync/datasync-preview-sqldbs.png) -* **.NET Framework 4.0** +3. On the **SQL databases** blade, select the existing SQL database that you want to use as the hub database for Data Sync. The SQL database blade opens. - Install .NET Framework 4.0 from [here](http://go.microsoft.com/fwlink/?linkid=205836). -* **Microsoft SQL Server 2008 R2 SP1 System CLR Types (x86)** +4. On the SQL database blade for the selected database, select **Sync to other databases**. The Data Sync blade opens. - Install the Microsoft SQL Server 2008 R2 SP1 System CLR Types (x86) from [here](http://www.microsoft.com/download/en/details.aspx?id=26728) -* **Microsoft SQL Server 2008 R2 SP1 Shared Management Objects (x86)** + ![Sync to other databases option](media/sql-database-get-started-sql-data-sync/datasync-preview-newsyncgroup.png) - Install the Microsoft SQL Server 2008 R2 SP1 Shared Management Objects (x86) from [here](http://www.microsoft.com/download/en/details.aspx?id=26728) +### Create a new Sync Group - +1. On the Data Sync blade, select **New Sync Group**. The **New sync group** blade opens with Step 1, **Create sync group**, highlighted. The **Create Data Sync Group** blade also opens. -### Step 2b: Install a new Client Agent -Follow the instructions in [Install a Client Agent (SQL Data Sync)](http://download.microsoft.com/download/4/E/3/4E394315-A4CB-4C59-9696-B25215A19CEF/SQL_Data_Sync_Preview.pdf) to install the agent. +2. On the **Create Data Sync Group** blade, do the following things: - + 1. In the **Sync Group Name** field, enter a name for the new sync group. -### Step 2c: Finish the New SQL Data Sync Agent wizard -1. Return to the New SQL Data Sync Agent wizard. -2. Give the agent a meaningful name. -3. From the dropdown, select the **REGION** (data center) to host this agent. -4. From the dropdown, select the **SUBSCRIPTION** to host this agent. -5. Click the right-arrow. + 2. 
In the **Sync Metadata Database** section, choose whether to create a new database (recommended) or to use an existing database. -## Step 3: Register a SQL Server database with the Client Agent -After the Client Agent is installed, register every on-premises SQL Server database that you intend to include in a sync group with the agent. -To register a database with the agent, follow the instructions at [Register a SQL Server Database with a Client Agent](http://download.microsoft.com/download/4/E/3/4E394315-A4CB-4C59-9696-B25215A19CEF/SQL_Data_Sync_Preview.pdf). + If you chose **New database**, select **Create new database.** The **SQL Database** blade opens. On the **SQL Database** blade, name and configure the new database. Then select **OK**. -## Step 4: Create a sync group - + If you chose **Use existing database**, select the database from the list. -### Step 4a: Start the New Sync Group wizard -1. Return to the [Classic Portal](http://manage.windowsazure.com). -2. Click **SQL DATABASES**. -3. Click **ADD SYNC** at the bottom of the page then select New Sync Group from the drawer. + 3. In the **Automatic Sync** section, first select **On** or **Off**. - ![Image2](./media/sql-database-get-started-sql-data-sync/NewSyncGroup-Figure2.png) + If you chose **On**, in the **Sync Frequency** section, enter a number and select Seconds, Minutes, Hours, or Days. - + ![Specify sync frequency](media/sql-database-get-started-sql-data-sync/datasync-preview-syncfreq.png) -### Step 4b: Enter the basic settings -1. Enter a meaningful name for the sync group. -2. From the dropdown, select the **REGION** (Data Center) to host this sync group. -3. Click the right-arrow. + 4. In the **Conflict Resolution** section, select "Hub wins" or "Member wins." - ![Image3](./media/sql-database-get-started-sql-data-sync/NewSyncGroupName-Figure3.PNG) + ![Specify how conflicts are resolved](media/sql-database-get-started-sql-data-sync/datasync-preview-conflictres.png) - + 5. 
Select **OK** and wait for the new sync group to be created and deployed. -### Step 4c: Define the sync hub -1. From the dropdown, select the SQL Database instance to serve as the sync group hub. -2. Enter the credentials for this SQL Database instance - **HUB USERNAME** and **HUB PASSWORD**. -3. Wait for SQL Data Sync to confirm the USERNAME and PASSWORD. You will see a green check mark appear to the right of the PASSWORD when the credentials are confirmed. -4. From the dropdown, select the **CONFLICT RESOLUTION** policy. +## Step 2 - Add sync members - **Hub Wins** - any change written to the hub database write to the reference databases, overwriting changes in the same reference database record. Functionally, this means that the first change written to the hub propagates to the other databases. +After the new sync group is created and deployed, Step 2, **Add sync members**, is highlighted in the **New sync group** blade. - **Client Wins** - changes written to the hub are overwritten by changes in reference databases. Functionally, this means that the last change written to the hub is the one kept and propagated to the other databases. +In the **Hub Database** section, enter the existing credentials for the SQL Database server on which the hub database is located. Don't enter *new* credentials in this section. -1. Click the right-arrow. +![Hub database has been added to sync group](media/sql-database-get-started-sql-data-sync/datasync-preview-hubadded.png) - ![Image4](./media/sql-database-get-started-sql-data-sync/NewSyncGroupHub-Figure4.PNG) +## Add an Azure SQL Database - +In the **Member Database** section, optionally add an Azure SQL Database to the sync group by selecting **Add an Azure Database**. The **Configure Azure Database** blade opens. -### Step 4d: Add a reference database -Repeat this step for each additional database you want to add to the sync group. +On the **Configure Azure Database** blade, do the following things: -1. 
From the dropdown, select the database to add. +1. In the **Sync Member Name** field, provide a name for the new sync member. This name is distinct from the name of the database itself. - Databases in the dropdown include both SQL Server databases that have been registered with the agent and SQL Database instances. -2. Enter credentials for this database - **USERNAME** and **PASSWORD**. -3. From the dropdown, select the **SYNC DIRECTION** for this database. +2. In the **Subscription** field, select the associated Azure subscription for billing purposes. - **Bi-directional** - changes in the reference database are written to the hub database, and changes to the hub database are written to the reference database. +3. In the **Azure SQL Server** field, select the existing SQL database server. - **Sync from the Hub** - The database receives updates from the Hub. It does not send changes to the Hub. +4. In the **Azure SQL Database** field, select the existing SQL database. - **Sync to the Hub** - The database sends updates to the Hub. Changes in the Hub are not written to this database. -4. To finish creating the sync group, click the check mark in the lower right of the wizard. Wait for the SQL Data Sync to confirm the credentials. A green check indicates that the credentials are confirmed. -5. Click the check mark a second time. This returns you to the **SYNC** page under SQL Databases. This sync group is now listed with your other sync groups and agents. +5. In the **Sync Directions** field, select Bi-directional Sync, To the Hub, or From the Hub. - ![Image5](./media/sql-database-get-started-sql-data-sync/NewSyncGroupReference-Figure5.PNG) + ![Adding a new SQL Database sync member](media/sql-database-get-started-sql-data-sync/datasync-preview-memberadding.png) -## Step 5: Define the data to sync -Azure SQL Data Sync allows you to select tables and columns to synchronize. 
If you also want to filter a column so that only rows with specific values (such as, Age>=65) synchronize, use the SQL Data Sync portal at Azure and the documentation at Select the Tables, Columns, and Rows to Synchronize to define the data to sync. +6. In the **Username** and **Password** fields, enter the existing credentials for the SQL Database server on which the member database is located. Don't enter *new* credentials in this section. -1. Return to the [Classic Portal](http://manage.windowsazure.com). -2. Click **SQL DATABASES**. -3. Click the **SYNC** tab. -4. Click the name of this sync group. -5. Click the **SYNC RULES** tab. -6. Select the database you want to provide the sync group schema. -7. Click the right-arrow. -8. Click **REFRESH SCHEMA**. -9. For each table in the database, select the columns to include in the synchronizations. - * Columns with unsupported data types cannot be selected. - * If no columns in a table are selected, the table is not included in the sync group. - * To select/unselect all the tables, click SELECT at the bottom of the screen. -10. Click **SAVE**, then wait for the sync group to finish provisioning. -11. To return to the Data Sync landing page, click the back-arrow in the upper left of the screen (above the sync group's name). +7. Select **OK** and wait for the new sync member to be created and deployed. - ![Image6](./media/sql-database-get-started-sql-data-sync/NewSyncGroupSyncRules-Figure6.PNG) + ![New SQL Database sync member has been added](media/sql-database-get-started-sql-data-sync/datasync-preview-memberadded.png) -## Step 6: Configure your sync group -You can always synchronize a sync group by clicking SYNC at the bottom of the Data Sync landing page. -To synchronize on a schedule, you configure the sync group. +## Add an on-premises SQL Server database -1. Return to the [Classic Portal](http://manage.windowsazure.com). -2. Click **SQL DATABASES**. -3. Click the **SYNC** tab. -4. 
Click the name of this sync group. -5. Click the **CONFIGURE** tab. -6. **AUTOMATIC SYNC** - * To configure the sync group to sync on a set frequency, click **ON**. You can still sync on demand by clicking SYNC. - * Click **OFF** to configure the sync group to sync only when you click SYNC. -7. **SYNC FREQUENCY** - * If AUTOMATIC SYNC is ON, set the synchronization frequency. The frequency must be between 5 Minutes and 1 Month. -8. Click **SAVE**. +In the **Member Database** section, optionally add an on-premises SQL Server to the sync group by selecting **Add an On-Premises Database**. The **Configure On-Premises** blade opens. -![Image7](./media/sql-database-get-started-sql-data-sync/NewSyncGroupConfigure-Figure7.PNG) +On the **Configure On-Premises** blade, do the following things: -Congratulations. You have created a sync group that includes both a SQL Database instance and a SQL Server database. +1. Select **Choose the Sync Agent Gateway**. The **Select Sync Agent** blade opens. + + ![Choose the sync agent gateway](media/sql-database-get-started-sql-data-sync/datasync-preview-choosegateway.png) + +2. On the **Choose the Sync Agent Gateway** blade, choose whether to use an existing agent or create a new agent. + + If you chose **Existing agents**, select the existing agent from the list. + + If you chose **Create a new agent**, do the following things: + + 1. Download the client sync agent software from the link provided and install it on the computer where the SQL Server is located. + + 2. Enter a name for the agent. + + 3. Select **Create and Generate Key**. + + 4. Copy the agent key to the clipboard. + + ![Creating a new sync agent](media/sql-database-get-started-sql-data-sync/datasync-preview-selectsyncagent.png) + + 5. Select **OK** to close the **Select Sync Agent** blade. + + 6. On the SQL Server computer, locate and run the Client Sync Agent app. 
+ + ![The data sync client agent app](media/sql-database-get-started-sql-data-sync/datasync-preview-clientagent.png) + + 7. In the sync agent app, select **Submit Agent Key**. The **Sync Metadata Database Configuration** dialog box opens. + + 8. In the **Sync Metadata Database Configuration** dialog box, paste in the agent key copied from the Azure portal. Also provide the existing credentials for the Azure SQL Database server on which the metadata database is located. (If you created a new metadata database, this database is on the same server as the hub database.) Select **OK** and wait for the configuration to finish. + + ![Enter the agent key and server credentials](media/sql-database-get-started-sql-data-sync/datasync-preview-agent-enterkey.png) + + > [!NOTE] + > If you get a firewall error at this point, you have to create a firewall rule on Azure to allow incoming traffic from the SQL Server computer. You can create the rule manually in the portal, but you may find it easier to create it in SQL Server Management Studio (SSMS). In SSMS, try to connect to the hub database on Azure. Enter its name as \<your_server_name\>.database.windows.net. Follow the steps in the dialog box to configure the Azure firewall rule. Then return to the Client Sync Agent app. + + 9. In the Client Sync Agent app, click **Register** to register a SQL Server database with the agent. The **SQL Server Configuration** dialog box opens. + + ![Add and configure a SQL Server database](media/sql-database-get-started-sql-data-sync/datasync-preview-agent-adddb.png) + + 10. In the **SQL Server Configuration** dialog box, choose whether to connect by using SQL Server authentication or Windows authentication. If you chose SQL Server authentication, enter the existing credentials. Provide the SQL Server name and the name of the database that you want to sync. Select **Test connection** to test your settings. Then select **Save**. The registered database appears in the list. 
+ + ![SQL Server database is now registered](media/sql-database-get-started-sql-data-sync/datasync-preview-agent-dbadded.png) + + 11. You can now close the Client Sync Agent app. + + 12. In the portal, on the **Configure On-Premises** blade, select **Select the Database.** The **Select Database** blade opens. + + 13. On the **Select Database** blade, in the **Sync Member Name** field, provide a name for the new sync member. This name is distinct from the name of the database itself. Select the database from the list. In the **Sync Directions** field, select Bi-directional Sync, To the Hub, or From the Hub. + + ![Select the on premises database](media/sql-database-get-started-sql-data-sync/datasync-preview-selectdb.png) + + 14. Select **OK** to close the **Select Database** blade. Then select **OK** to close the **Configure On-Premises** blade and wait for the new sync member to be created and deployed. Finally, click **OK** to close the **Select sync members** blade. + + ![On premises database added to sync group](media/sql-database-get-started-sql-data-sync/datasync-preview-onpremadded.png) + +## Step 3 - Configure sync group + +After the new sync group members are created and deployed, Step 3, **Configure sync group**, is highlighted in the **New sync group** blade. + +1. On the **Tables** blade, select a database from the list of sync group members, and then select **Refresh schema**. + +2. From the list of available tables, select the tables that you want to sync. + + ![Select tables to sync](media/sql-database-get-started-sql-data-sync/datasync-preview-tables.png) + +3. By default, all columns in the table are selected. If you don't want to sync all the columns, disable the checkbox for the columns that you don't want to sync. + + ![Select fields to sync](media/sql-database-get-started-sql-data-sync/datasync-preview-tables2.png) + +4. Finally, select **Save**. ## Next steps -For additional information on SQL Database and SQL Data Sync see: +Congratulations. 
You have created a sync group that includes both a SQL Database instance and a SQL Server database. + +For more info about SQL Database and SQL Data Sync, see: -* [Download the complete SQL Data Sync technical documentation](http://download.microsoft.com/download/4/E/3/4E394315-A4CB-4C59-9696-B25215A19CEF/SQL_Data_Sync_Preview.pdf) -* [SQL Database Overview](sql-database-technical-overview.md) -* [Database Lifecycle Management](https://msdn.microsoft.com/library/jj907294.aspx) +- [Download the complete SQL Data Sync technical documentation](https://github.com/Microsoft/sql-server-samples/raw/master/samples/features/sql-data-sync/Data_Sync_Preview_full_documentation.pdf?raw=true) +- [Download the SQL Data Sync REST API documentation](https://github.com/Microsoft/sql-server-samples/raw/master/samples/features/sql-data-sync/Data_Sync_Preview_REST_API.pdf?raw=true) +- [SQL Database Overview](sql-database-technical-overview.md) +- [Database Lifecycle Management](https://msdn.microsoft.com/library/jj907294.aspx) diff --git a/articles/sql-database/sql-database-metrics-diag-logging.md b/articles/sql-database/sql-database-metrics-diag-logging.md index fc00a2cd35207..8a8ab6efea167 100644 --- a/articles/sql-database/sql-database-metrics-diag-logging.md +++ b/articles/sql-database/sql-database-metrics-diag-logging.md @@ -19,7 +19,7 @@ ms.author: vvasic --- # Azure SQL Database metrics and diagnostics logging -Azure SQL Database can emit metrics and diagnostic logs for easier monitoring. You can configure Azure SQL Database to store resource usage, workers and sessions and connectivity into one of these Azure resources: +Azure SQL Database can emit metrics and diagnostic logs for easier monitoring. 
You can configure Azure SQL Database to store resource usage, workers and sessions, and connectivity into one of these Azure resources: - **Azure Storage**: For archiving vast amounts of telemetry for a small price - **Azure Event Hub**: For integrating Azure SQL Database telemetry with your custom monitoring solution or hot pipelines - **Azure Log Analytics**: For out of the box monitoring solution with reporting, alerting, and mitigating capabilities @@ -98,7 +98,7 @@ To enable metrics and diagnostics logging using the Azure CLI, use the following - To enable storage of Diagnostic Logs in a Storage Account, use this command: - ```azurecli + ```azurecli-interactive azure insights diagnostic set --resourceId --storageId --enabled true ``` @@ -106,19 +106,19 @@ To enable metrics and diagnostics logging using the Azure CLI, use the following - To enable streaming of Diagnostic Logs to an Event Hub, use this command: - ```azurecli + ```azurecli-interactive azure insights diagnostic set --resourceId --serviceBusRuleId --enabled true ``` The Service Bus Rule ID is a string with this format: - ```azurecli + ```azurecli-interactive {service bus resource ID}/authorizationrules/{key name} ``` - To enable sending of Diagnostic Logs to a Log Analytics workspace, use this command: - ```azurecli + ```azurecli-interactive azure insights diagnostic set --resourceId --workspaceId --enabled true ``` diff --git a/articles/sql-database/sql-database-security-tutorial.md b/articles/sql-database/sql-database-security-tutorial.md index 49f5029a5bd5d..86e82397bc14c 100644 --- a/articles/sql-database/sql-database-security-tutorial.md +++ b/articles/sql-database/sql-database-security-tutorial.md @@ -108,7 +108,7 @@ Follow these steps to create a user using SQL Authentication: 3. In the query window, enter the following query: ```sql - CREATE USER 'ApplicationUserUser' WITH PASSWORD = 'strong_password'; + CREATE USER ApplicationUserUser WITH PASSWORD = 'YourStrongPassword1'; ``` 4. 
On the toolbar, click **Execute** to create the user. @@ -116,8 +116,8 @@ Follow these steps to create a user using SQL Authentication: 5. By default, the user can connect to the database, but has no permissions to read or write data. To grant these permissions to the newly created user, execute the following two commands in a new query window ```sql - ALTER ROLE db_datareader ADD MEMBER 'ApplicationUserUser'; - ALTER ROLE db_datawriter ADD MEMBER 'ApplicationUserUser'; + ALTER ROLE db_datareader ADD MEMBER ApplicationUserUser; + ALTER ROLE db_datawriter ADD MEMBER ApplicationUserUser; ``` It is best practice to create these non-administrator accounts at the database level to connect to your database unless you need to execute administrator tasks like creating new users. Please review the [Azure Active Directory tutorial](./sql-database-aad-authentication-configure.md) on how to authenticate using Azure Active Directory. diff --git a/articles/sql-database/toc.yml b/articles/sql-database/toc.yml index 3cdf204ed02e6..fb736a0e3790b 100644 --- a/articles/sql-database/toc.yml +++ b/articles/sql-database/toc.yml @@ -13,7 +13,7 @@ href: sql-database-get-started-cli.md - name: Create DB - PowerShell href: sql-database-get-started-powershell.md - - name: Connect and query + - name: Connect & query items: - name: SSMS href: sql-database-connect-query-ssms.md @@ -53,18 +53,16 @@ href: sql-database-powershell-samples.md - name: Concepts items: - - name: DBs and servers + - name: Databases & servers items: - name: Databases href: sql-database-overview.md - name: Servers href: sql-database-server-overview.md - - name: Elastic pools - href: sql-database-elastic-pool.md - - name: Resources - items: - name: Service tiers href: sql-database-service-tiers.md + - name: Elastic pools + href: sql-database-elastic-pool.md - name: DTUs and eDTUs href: sql-database-what-is-a-dtu.md - name: DTU benchmark @@ -77,38 +75,6 @@ href: sql-database-features.md - name: Tools href: 
sql-database-manage-overview.md - - name: Partition data - items: - - name: Sharded databases - href: sql-database-elastic-scale-introduction.md - - name: Elastic client library - href: sql-database-elastic-database-client-library.md - - name: Shard maps - href: sql-database-elastic-scale-shard-map-management.md - - name: Query routing - href: sql-database-elastic-scale-data-dependent-routing.md - - name: Manage credentials - href: sql-database-elastic-scale-manage-credentials.md - - name: Shard querying - href: sql-database-elastic-scale-multishard-querying.md - - name: Elastic tools - href: sql-database-elastic-scale-glossary.md - - name: Move sharded data - href: sql-database-elastic-scale-overview-split-and-merge.md - - name: Elastic tools FAQ - href: sql-database-elastic-scale-faq.md - - name: Manage multiple DBs - items: - - name: Elastic queries - href: sql-database-elastic-query-overview.md - - name: Horizontal data - href: sql-database-elastic-query-horizontal-partitioning.md - - name: Vertical data - href: sql-database-elastic-query-vertical-partitioning.md - - name: Transactions - href: sql-database-elastic-transactions-overview.md - - name: Elastic jobs - href: sql-database-elastic-jobs-overview.md - name: Security items: - name: Overview @@ -145,45 +111,21 @@ href: sql-database-long-term-retention.md - name: Database recovery href: sql-database-recovery-using-backups.md - - name: Auto failover and geo-replication + - name: Failover groups href: sql-database-geo-replication-overview.md - - name: Logins - href: sql-database-geo-replication-security-config.md - - name: App design - href: sql-database-designing-cloud-solutions-for-disaster-recovery.md - - name: Elastic pools - href: sql-database-disaster-recovery-strategies-for-applications-with-elastic-pool.md - - name: App upgrades - href: sql-database-manage-application-rolling-upgrade.md - - name: Database development + - name: Load & move data items: - - name: Overview - href: 
sql-database-develop-overview.md - - name: Connectivity - href: sql-database-libraries.md - - name: JSON data - href: sql-database-json-features.md - - name: In-memory - href: sql-database-in-memory.md - - name: Temporal tables - href: sql-database-temporal-tables.md - - name: Retention policies - href: sql-database-temporal-tables-retention-policy.md - - name: Database migration - items: - - name: SQL Server DB + - name: Migrate SQL Server DB href: sql-database-cloud-migrate.md - name: T-SQL changes href: sql-database-transact-sql-information.md - - name: Data movement - items: - name: Copy a DB href: sql-database-copy.md - name: Import a DB href: sql-database-import.md - name: Export a DB href: sql-database-export.md - - name: Monitor and tune + - name: Monitor & manage items: - name: Single databases href: sql-database-single-database-monitor.md @@ -201,51 +143,107 @@ href: sql-database-xevent-db-diff-from-svr.md - name: Compatibility levels href: sql-database-compatibility-level-query-performance-130.md + - name: Scale out apps + items: + - name: SaaS design patterns + href: sql-database-design-patterns-multi-tenancy-saas-applications.md + - name: Sharded databases + href: sql-database-elastic-scale-introduction.md + - name: Elastic client library + href: sql-database-elastic-database-client-library.md + - name: Shard maps + href: sql-database-elastic-scale-shard-map-management.md + - name: Query routing + href: sql-database-elastic-scale-data-dependent-routing.md + - name: Manage credentials + href: sql-database-elastic-scale-manage-credentials.md + - name: Shard querying + href: sql-database-elastic-scale-multishard-querying.md + - name: Elastic tools + href: sql-database-elastic-scale-glossary.md + - name: Move sharded data + href: sql-database-elastic-scale-overview-split-and-merge.md + - name: Elastic tools FAQ + href: sql-database-elastic-scale-faq.md + - name: Elastic queries + href: sql-database-elastic-query-overview.md + - name: Horizontal data + 
href: sql-database-elastic-query-horizontal-partitioning.md + - name: Vertical data + href: sql-database-elastic-query-vertical-partitioning.md + - name: Transactions + href: sql-database-elastic-transactions-overview.md + - name: Elastic jobs + href: sql-database-elastic-jobs-overview.md + - name: Develop databases + items: + - name: JSON data + href: sql-database-json-features.md + - name: In-memory + href: sql-database-in-memory.md + - name: Temporal tables + href: sql-database-temporal-tables.md + - name: Retention policies + href: sql-database-temporal-tables-retention-policy.md + - name: Configure In-Memory + href: sql-database-in-memory-oltp-migration.md + - name: Develop apps + items: + - name: Overview + href: sql-database-develop-overview.md + - name: Connectivity + href: sql-database-libraries.md - name: How-to guides items: - - name: Manage elastic pools + - name: Databases & servers items: - - name: Portal + - name: Elastic pools - portal href: sql-database-elastic-pool-manage-portal.md - - name: PowerShell + - name: Elastic pools - PowerShell href: sql-database-elastic-pool-manage-powershell.md - - name: Transact-SQL + - name: Elastic pools - Transact-SQL href: sql-database-elastic-pool-manage-tsql.md - - name: C # + - name: Elastic pools - C# href: sql-database-elastic-pool-manage-csharp.md - - name: DB Access + - name: Security items: - - name: SQL Server + - name: SQL Server auth tutorial href: sql-database-control-access-sql-authentication-get-started.md - - name: Azure AD + - name: Azure AD auth tutorial href: sql-database-control-access-aad-authentication-get-started.md - - name: Secure data - items: - - name: Azure AD auth + - name: Configure Azure AD auth href: sql-database-aad-authentication-configure.md - - name: Encrypt - cert store + - name: Always Encrypted cert store href: sql-database-always-encrypted.md - - name: Encrypt - key vault + - name: Always Encrypted key vault href: sql-database-always-encrypted-azure-key-vault.md - name: 
Configure masking href: sql-database-dynamic-data-masking-get-started-portal.md - - name: Recover single table - href: sql-database-cloud-migrate-restore-single-table-azure-backup.md - - name: Configure vault for backups - href: sql-database-long-term-backup-retention-configure.md - - name: Geo-replicate data + - name: Business continuity items: - - name: Portal + - name: Configure vault - backups + href: sql-database-long-term-backup-retention-configure.md + - name: Configure security + href: sql-database-geo-replication-security-config.md + - name: App design & recovery + href: sql-database-designing-cloud-solutions-for-disaster-recovery.md + - name: App design and pools + href: sql-database-disaster-recovery-strategies-for-applications-with-elastic-pool.md + - name: App design and upgrades + href: sql-database-manage-application-rolling-upgrade.md + - name: Geo-replicate - portal href: sql-database-geo-replication-portal.md - - name: T-SQL - Configure + - name: Geo-replicate - T-SQL - Configure href: sql-database-geo-replication-transact-sql.md - - name: T-SQL - Failover + - name: Geo-replicate - T-SQL - Failover href: sql-database-geo-replication-failover-transact-sql.md - - name: Recover - outage + - name: Recover from outage href: sql-database-disaster-recovery.md - - name: Perform drills + - name: Perform recovery drill href: sql-database-disaster-recovery-drills.md - - name: Move data + - name: Recover single table + href: sql-database-cloud-migrate-restore-single-table-azure-backup.md + - name: Load & move data items: - name: Load data with BCP href: sql-database-load-from-csv-with-bcp.md @@ -253,21 +251,31 @@ href: ../data-factory/data-factory-copy-data-from-azure-blob-storage-to-sql-database.md - name: Sync data href: sql-database-get-started-sql-data-sync.md - - name: Connect applications + - name: Monitor & manage items: - - name: C and C ++ - href: sql-database-develop-cplusplus-simple.md - - name: Excel - href: sql-database-connect-excel.md - - name: 
Guidance - href: sql-database-connectivity-issues.md - - name: Issues - href: sql-database-troubleshoot-common-connection-issues.md - - name: Create DB with C # - href: sql-database-get-started-csharp.md - - name: Configure In-Memory - href: sql-database-in-memory-oltp-migration.md - - name: Manage multiple DBs + - name: Use Database Advisor + href: sql-database-advisor-portal.md + - name: Use QPI + href: sql-database-performance.md + - name: Evaluate and tune + href: sql-database-troubleshoot-performance.md + - name: Create alerts + href: sql-database-insights-alerts-portal.md + - name: Monitor in-memory + href: sql-database-in-memory-oltp-monitoring.md + - name: Extended events - event file + href: sql-database-xevent-code-event-file.md + - name: Extended events - ring buffer + href: sql-database-xevent-code-ring-buffer.md + - name: Diagnostic logging + href: sql-database-metrics-diag-logging.md + - name: Azure Automation + href: sql-database-manage-automation.md + - name: Azure RemoteApp + href: sql-database-ssms-remoteapp.md + - name: SSMS - MFA + href: sql-database-ssms-mfa-authentication-configure.md + - name: Scale out apps items: - name: Create sharded app href: sql-database-elastic-scale-get-started.md @@ -301,34 +309,18 @@ href: sql-database-elastic-query-getting-started.md - name: Query vertical data href: sql-database-elastic-query-getting-started-vertical.md - - name: Monitor and tune - items: - - name: Use Database Advisor - href: sql-database-advisor-portal.md - - name: Use QPI - href: sql-database-performance.md - - name: Evaluate and tune - href: sql-database-troubleshoot-performance.md - - name: Create alerts - href: sql-database-insights-alerts-portal.md - - name: Monitor in-memory - href: sql-database-in-memory-oltp-monitoring.md - - name: Extended events - event file - href: sql-database-xevent-code-event-file.md - - name: Extended events - ring buffer - href: sql-database-xevent-code-ring-buffer.md - - name: Diagnostic logging - href: 
sql-database-metrics-diag-logging.md - - name: Manage - items: - - name: Azure Automation - href: sql-database-manage-automation.md - - name: Azure RemoteApp - href: sql-database-ssms-remoteapp.md - - name: SSMS - MFA - href: sql-database-ssms-mfa-authentication-configure.md - name: Develop apps items: + - name: C and C++ + href: sql-database-develop-cplusplus-simple.md + - name: Excel + href: sql-database-connect-excel.md + - name: Connectivity guidance + href: sql-database-connectivity-issues.md + - name: Connectivity issues + href: sql-database-troubleshoot-common-connection-issues.md + - name: Create DB with C# + href: sql-database-get-started-csharp.md - name: Ports - ADO.NET href: sql-database-develop-direct-route-ports-adonet-v12.md - name: Authenticate App @@ -339,8 +331,6 @@ href: sql-database-elastic-scale-working-with-dapper.md - name: Batching for perf href: sql-database-use-batching-to-improve-performance.md - - name: SaaS app design - href: sql-database-design-patterns-multi-tenancy-saas-applications.md - name: SaaS app security href: sql-database-elastic-tools-multi-tenant-row-level-security.md - name: SaaS app tutorial diff --git a/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-design-overview.md b/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-design-overview.md index 1e47f23b622d9..7be1cf3966cde 100644 --- a/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-design-overview.md +++ b/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-design-overview.md @@ -15,7 +15,7 @@ ms.workload: na ms.tgt_pltfrm: vm-linux ms.devlang: na ms.topic: article -ms.date: 02/13/2017 +ms.date: 06/01/2017 ms.author: negat --- diff --git a/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-start.md b/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-start.md index 6a7bd3151b985..bc35a6c8ea588 100644 --- a/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-start.md 
+++ b/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-start.md @@ -13,7 +13,7 @@ ms.workload: na ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 2/14/2017 +ms.date: 06/01/2017 ms.author: negat --- diff --git a/articles/virtual-machines/linux/classic/lamp-script.md b/articles/virtual-machines/linux/classic/lamp-script.md index 8528bd54e6ff7..be9a0f4ea9136 100644 --- a/articles/virtual-machines/linux/classic/lamp-script.md +++ b/articles/virtual-machines/linux/classic/lamp-script.md @@ -14,7 +14,7 @@ ms.workload: multiple ms.tgt_pltfrm: linux ms.devlang: na ms.topic: article -ms.date: 09/13/2016 +ms.date: 06/01/2017 ms.author: guybo --- @@ -24,14 +24,14 @@ ms.author: guybo The Microsoft Azure CustomScript Extension for Linux provides a way to customize your virtual machines (VMs) by running arbitrary code written in any scripting language supported by the VM (for example, Python, and Bash). This provides a very flexible way to automate application deployment to multiple machines. -You can deploy the CustomScript Extension using the Azure classic portal, Windows PowerShell, or the Azure Command-Line Interface (Azure CLI). +You can deploy the CustomScript Extension using the Azure portal, Windows PowerShell, or the Azure Command-Line Interface (Azure CLI). In this article we'll use the Azure CLI to deploy a simple LAMP application to an Ubuntu VM created using the classic deployment model. ## Prerequisites For this example, first create two Azure VMs running Ubuntu 14.04 or later. The VMs are called *script-vm* and *lamp-vm*. Use unique names when you create the VMs. One is used to run the CLI commands and one is used to deploy the LAMP app. -You also need an Azure Storage account and a key to access it (you can get this from the Azure classic portal). +You also need an Azure Storage account and a key to access it (you can get this from the Azure portal). 
If you need help creating Linux VMs on Azure refer to [Create a Virtual Machine Running Linux](createportal.md). diff --git a/articles/virtual-machines/linux/classic/media/optimize-mysql/virtual-machines-linux-optimize-mysql-perf-Disks-option.png b/articles/virtual-machines/linux/classic/media/optimize-mysql/virtual-machines-linux-optimize-mysql-perf-Disks-option.png new file mode 100644 index 0000000000000..5a2ced6eb2f7e Binary files /dev/null and b/articles/virtual-machines/linux/classic/media/optimize-mysql/virtual-machines-linux-optimize-mysql-perf-Disks-option.png differ diff --git a/articles/virtual-machines/linux/classic/media/optimize-mysql/virtual-machines-linux-optimize-mysql-perf-attach-empty-disk.png b/articles/virtual-machines/linux/classic/media/optimize-mysql/virtual-machines-linux-optimize-mysql-perf-attach-empty-disk.png new file mode 100644 index 0000000000000..1c351c973a877 Binary files /dev/null and b/articles/virtual-machines/linux/classic/media/optimize-mysql/virtual-machines-linux-optimize-mysql-perf-attach-empty-disk.png differ diff --git a/articles/virtual-machines/linux/classic/optimize-mysql.md b/articles/virtual-machines/linux/classic/optimize-mysql.md index 2e867d31715a4..fd5f6e1f44f03 100644 --- a/articles/virtual-machines/linux/classic/optimize-mysql.md +++ b/articles/virtual-machines/linux/classic/optimize-mysql.md @@ -14,7 +14,7 @@ ms.workload: infrastructure-services ms.tgt_pltfrm: vm-linux ms.devlang: na ms.topic: article -ms.date: 12/15/2015 +ms.date: 05/31/2017 ms.author: ningk --- @@ -38,27 +38,22 @@ There are limits on how many disks you can add for different virtual machine typ This article assumes you have already created a Linux virtual machine and have MYSQL installed and configured. For more information on getting started, see How to install MySQL on Azure. ### Set up RAID on Azure -The following steps show how to create RAID on Azure by using the Azure classic portal. 
You can also set up RAID by using Windows PowerShell scripts. +The following steps show how to create RAID on Azure by using the Azure portal. You can also set up RAID by using Windows PowerShell scripts. In this example, we will configure RAID 0 with four disks. #### Add a data disk to your virtual machine -On the virtual machines page of the Azure classic portal, click the virtual machine to which you want to add a data disk. In this example, the virtual machine is mysqlnode1. +In the Azure portal, go to the dashboard and select the virtual machine to which you want to add a data disk. In this example, the virtual machine is mysqlnode1. -![Virtual machines][1] + -On the page for the virtual machine, click **Dashboard**. +Click **Disks** and then click **Attach New**. -![Virtual machine dashboard][2] +![Virtual machines add disk](media/optimize-mysql/virtual-machines-linux-optimize-mysql-perf-Disks-option.png) -In the taskbar, click **Attach**. +Create a new 500 GB disk. Make sure that **Host Cache Preference** is set to **None**. When you're finished, click **OK**. -![Virtual machine taskbar][3] +![Attach empty disk](media/optimize-mysql/virtual-machines-linux-optimize-mysql-perf-attach-empty-disk.png) -And then click **Attach empty disk**. - -![Attach empty disk][4] - -For data disks, the **Host Cache Preference** should be set to **None**. This adds one empty disk into your virtual machine. Repeat this step three more times so that you have four data disks for RAID. 
@@ -344,3 +339,4 @@ For more detailed [optimization configuration parameters](http://dev.mysql.com/d [12]:media/optimize-mysql/virtual-machines-linux-optimize-mysql-perf-12.png [13]:media/optimize-mysql/virtual-machines-linux-optimize-mysql-perf-13.png [14]:media/optimize-mysql/virtual-machines-linux-optimize-mysql-perf-14.png + diff --git a/articles/virtual-machines/linux/create-upload-openbsd.md b/articles/virtual-machines/linux/create-upload-openbsd.md index 52b87f2523a83..09d85a0ff0eb7 100644 --- a/articles/virtual-machines/linux/create-upload-openbsd.md +++ b/articles/virtual-machines/linux/create-upload-openbsd.md @@ -18,7 +18,7 @@ ms.date: 05/24/2017 ms.author: kyliel --- -# Create and upload a OpenBSD disk image to Azure +# Create and upload an OpenBSD disk image to Azure This article shows you how to create and upload a virtual hard disk (VHD) that contains the OpenBSD operating system. After you upload it, you can use it as your own image to create a virtual machine (VM) in Azure through Azure CLI. @@ -67,9 +67,8 @@ On the VM where you installed the OpenBSD operating system 6.1, which added Hype 6. The latest release of the Azure agent can always be found on [Github](https://github.com/Azure/WALinuxAgent/releases). Install the agent as follows: ```sh - git clone https://github.com/reyk/WALinuxAgent + git clone https://github.com/Azure/WALinuxAgent cd WALinuxAgent - git checkout waagent-openbsd python setup.py install waagent -register-service ``` @@ -93,11 +92,10 @@ Now you can shut down your VM. ## Prepare the VHD -The VHDX format is not supported in Azure, only **fixed VHD**. You can convert the disk to fixed VHD format using Hyper-V Manager or the Powershell [convert-vhd](https://technet.microsoft.com/itpro/powershell/windows/hyper-v/convert-vhd) cmdlet and [resize-vhd](https://technet.microsoft.com/itpro/powershell/windows/hyper-v/resize-vhd). Examples are as followings. +The VHDX format is not supported in Azure, only **fixed VHD**. 
You can convert the disk to fixed VHD format using Hyper-V Manager or the PowerShell [convert-vhd](https://technet.microsoft.com/itpro/powershell/windows/hyper-v/convert-vhd) cmdlet. An example is as follows. ```powershell -Resize-VHD -Path OpenBSD61.vhdx -SizeBytes 20GB -Convert-VHD OpenBSD61.vhdx OpenBSD61.vhd +Convert-VHD OpenBSD61.vhdx OpenBSD61.vhd -VHDType Fixed ``` ## Create storage resources and upload @@ -119,9 +117,9 @@ az storage account create --resource-group myResourceGroup \ To control access to the storage account, obtain the storage key with [az storage account key list](/cli/azure/storage/account/key#list) as follows: ```azurecli -$STORAGE_KEY=$(az storage account keys list \ - ---resource-group myResourceGroup \ - --name mystorageaccount \ +STORAGE_KEY=$(az storage account keys list \ + --resource-group myResourceGroup \ + --account-name mystorageaccount \ + --query "[?keyName=='key1'] | [0].value" -o tsv) ``` @@ -173,4 +171,6 @@ ssh azureuser@ ## Next steps -If you want to know more about Hyper-V support on OpenBSD6.1, read [OpenBSD 6.1](https://www.openbsd.org/61.html) and [hyperv.4](http://man.openbsd.org/hyperv.4). \ No newline at end of file +If you want to know more about Hyper-V support on OpenBSD 6.1, read [OpenBSD 6.1](https://www.openbsd.org/61.html) and [hyperv.4](http://man.openbsd.org/hyperv.4). + +If you want to create a VM from a managed disk, read [az disk](/cli/azure/disk). 
\ No newline at end of file diff --git a/articles/virtual-machines/linux/intro-on-azure.md b/articles/virtual-machines/linux/intro-on-azure.md index 74687d83fd191..c5c6f67cb148f 100644 --- a/articles/virtual-machines/linux/intro-on-azure.md +++ b/articles/virtual-machines/linux/intro-on-azure.md @@ -14,7 +14,7 @@ ms.workload: infrastructure-services ms.tgt_pltfrm: vm-linux ms.devlang: na ms.topic: article -ms.date: 05/30/2017 +ms.date: 06/01/2017 ms.author: szark --- @@ -22,7 +22,7 @@ ms.author: szark This topic provides an overview of some aspects of using Linux virtual machines in the Azure cloud. Deploying a Linux virtual machine is a straightforward process using an image from the gallery. ## Authentication: Usernames, Passwords and SSH Keys -When creating a Linux virtual machine using the Azure portal, you are asked to provide a username, password or an SSH public key. The choice of a username for deploying a Linux virtual machine on Azure is subject to the following constraint: names of system accounts (UID <100) already present in the virtual machine are not allowed, 'root' for example. +When creating a Linux virtual machine using the Azure portal, you are asked to provide either a username and password or an SSH public key. The choice of a username for deploying a Linux virtual machine on Azure is subject to the following constraint: names of system accounts (UID <100) already present in the virtual machine are not allowed, 'root' for example. 
* See [Create a Virtual Machine Running Linux](quick-create-cli.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) * See [How to Use SSH with Linux on Azure](mac-create-ssh-keys.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) @@ -65,9 +65,9 @@ The Azure Linux Agent includes functionality to automatically detect this name c ## Virtual Machine Image Capture Azure provides the ability to capture the state of an existing virtual machine into an image that can subsequently be used to deploy additional virtual machine instances. The Azure Linux Agent may be used to rollback some of the customization that was performed during the provisioning process. You may follow the steps below to capture a virtual machine as an image: -1. Run **waagent -deprovision** to undo provisioning customization. Or **waagent -deprovision+user** to optionally, delete the user account specified during provisioning and all associated data. +1. Run **waagent -deprovision** to undo provisioning customization. Or **waagent -deprovision+user** to optionally delete the user account specified during provisioning and all associated data. 2. Shut down/power off the virtual machine. -3. Click *Capture* in the Azure portal or use the Powershell or CLI tools to capture the virtual machine as an image. +3. Click **Capture** in the Azure portal or use the PowerShell or CLI tools to capture the virtual machine as an image. 
* See: [How to Capture a Linux Virtual Machine to Use as a Template](classic/capture-image.md?toc=%2fazure%2fvirtual-machines%2flinux%2fclassic%2ftoc.json) diff --git a/articles/virtual-machines/linux/python-django-web-app.md b/articles/virtual-machines/linux/python-django-web-app.md index a3582eaa03ada..172d7dcafb556 100644 --- a/articles/virtual-machines/linux/python-django-web-app.md +++ b/articles/virtual-machines/linux/python-django-web-app.md @@ -14,7 +14,7 @@ ms.workload: web ms.tgt_pltfrm: vm-linux ms.devlang: python ms.topic: article -ms.date: 11/17/2015 +ms.date: 05/31/2017 ms.author: huvalo --- @@ -53,7 +53,7 @@ A screenshot of the completed application is below: The Ubuntu Linux VM already comes with Python 2.7 pre-installed, but it doesn't have Apache or django installed. Follow these steps to connect to your VM and install Apache and django. 1. Launch a new **Terminal** window. -2. Enter the following command to connect to the Azure VM. If you didn't create a FQDN, you can connect using the public IP address displayed in the virtual machine summary in the Azure classic portal. +2. Enter the following command to connect to the Azure VM. If you didn't create an FQDN, you can connect using the public IP address displayed in the virtual machine summary in the Azure portal. $ ssh yourusername@yourVmUrl 3. Enter the following commands to install django: diff --git a/articles/virtual-machines/windows/classic/hpcpack-rdma-cluster.md b/articles/virtual-machines/windows/classic/hpcpack-rdma-cluster.md index bce1916ae3c7d..8432cd5c28fa3 100644 --- a/articles/virtual-machines/windows/classic/hpcpack-rdma-cluster.md +++ b/articles/virtual-machines/windows/classic/hpcpack-rdma-cluster.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: vm-windows ms.workload: big-compute -ms.date: 12/29/2016 +ms.date: 06/01/2017 ms.author: danlep --- @@ -60,7 +60,7 @@ in an Azure VM. Configure a certificate to secure the connection between the head node and Azure. 
For options and procedures, see [Scenarios to Configure the Azure Management Certificate for HPC Pack](http://technet.microsoft.com/library/gg481759.aspx). For test deployments, HPC Pack installs a Default Microsoft HPC Azure Management Certificate you can quickly upload to your Azure subscription. 3. **Create a new cloud service and a storage account** - Use the Azure classic portal to create a cloud service and a storage account for the deployment in a region where the RDMA-capable instances are available. + Use the Azure portal to create a cloud service and a storage account for the deployment in a region where the RDMA-capable instances are available. 4. **Create an Azure node template** Use the Create Node Template Wizard in HPC Cluster Manager. For steps, see [Create an Azure node template](http://technet.microsoft.com/library/gg481758.aspx#BKMK_Templ) in “Steps to Deploy Azure Nodes with Microsoft HPC Pack”. diff --git a/articles/virtual-machines/windows/classic/media/python-django-web-app/django-helloworld-add-endpoint-new-portal.png b/articles/virtual-machines/windows/classic/media/python-django-web-app/django-helloworld-add-endpoint-new-portal.png new file mode 100644 index 0000000000000..b475c5cc1614b Binary files /dev/null and b/articles/virtual-machines/windows/classic/media/python-django-web-app/django-helloworld-add-endpoint-new-portal.png differ diff --git a/articles/virtual-machines/windows/classic/media/python-django-web-app/django-helloworld-add-endpoint-set-ports-new-portal.png b/articles/virtual-machines/windows/classic/media/python-django-web-app/django-helloworld-add-endpoint-set-ports-new-portal.png new file mode 100644 index 0000000000000..f6795e2c2307d Binary files /dev/null and b/articles/virtual-machines/windows/classic/media/python-django-web-app/django-helloworld-add-endpoint-set-ports-new-portal.png differ diff --git a/articles/virtual-machines/windows/classic/python-django-web-app.md 
b/articles/virtual-machines/windows/classic/python-django-web-app.md index a8c02817f694c..4fea8b05587d3 100644 --- a/articles/virtual-machines/windows/classic/python-django-web-app.md +++ b/articles/virtual-machines/windows/classic/python-django-web-app.md @@ -14,7 +14,7 @@ ms.workload: web ms.tgt_pltfrm: vm-windows ms.devlang: python ms.topic: article -ms.date: 08/04/2015 +ms.date: 05/31/2017 ms.author: huvalo --- @@ -51,12 +51,18 @@ A screenshot of the completed application appears next. 1. Follow the instructions given [here](tutorial.md) to create an Azure virtual machine of the Windows Server 2012 R2 Datacenter distribution. 2. Instruct Azure to direct port 80 traffic from the web to port 80 on the virtual machine: - * Navigate to your newly created virtual machine in the Azure classic portal and click the **ENDPOINTS** tab. - * Click the **ADD** button at the bottom of the screen. - ![add endpoint](./media/python-django-web-app/django-helloworld-addendpoint.png) - * Open up the **TCP** protocol's **PUBLIC PORT 80** as **PRIVATE PORT 80**. - ![][port80] -3. From the **DASHBOARD** tab, click **CONNECT** to use **Remote Desktop** to remotely log into the newly created Azure virtual machine. + * In the Azure portal, go to the dashboard and select your newly created virtual machine. + * Click **Endpoints** and then click **Add** at the top of the pane. + + ![add endpoint](./media/python-django-web-app/django-helloworld-add-endpoint-new-portal.png) + + * For **Name**, enter `HTTP`. Set the public and private TCP ports to 80. + + ![set port 80](./media/python-django-web-app/django-helloworld-add-endpoint-set-ports-new-portal.png) + + * When you're done, click **OK** at the bottom of the pane. + +3. In the dashboard, select your VM and click **Connect** at the top of the pane to use Remote Desktop to remotely log into the newly created Azure virtual machine. 
**Important Note:** All instructions below assume you logged into the virtual machine correctly and are issuing commands there rather than your local machine. diff --git a/articles/virtual-machines/windows/excel-cluster-hpcpack.md b/articles/virtual-machines/windows/excel-cluster-hpcpack.md index a918d333ad21b..b77f88e65a8f3 100644 --- a/articles/virtual-machines/windows/excel-cluster-hpcpack.md +++ b/articles/virtual-machines/windows/excel-cluster-hpcpack.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: vm-windows ms.workload: big-compute -ms.date: 04/11/2017 +ms.date: 06/01/2017 ms.author: danlep --- @@ -56,7 +56,7 @@ Use an Azure quickstart template to quickly deploy an HPC Pack cluster in the Az a. On the **Parameters** page, enter or modify values for the template parameters. (Click the icon next to each setting for help information.) Sample values are shown in the following screen. This example creates a cluster named *hpc01* in the *hpc.local* domain consisting of a head node and 2 compute nodes. The compute nodes are created from an HPC Pack VM image that includes Microsoft Excel. - ![Enter parameters][parameters] + ![Enter parameters][parameters-new-portal] > [!NOTE] > The head node VM is created automatically from the [latest Marketplace image](https://azure.microsoft.com/marketplace/partners/microsoft/hpcpack2012r2onwindowsserver2012r2/) of HPC Pack 2012 R2 on Windows Server 2012 R2. Currently the image is based on HPC Pack 2012 R2 Update 3. @@ -74,9 +74,9 @@ Use an Azure quickstart template to quickly deploy an HPC Pack cluster in the Az e. On the **Legal terms** page, review the terms. If you agree, click **Purchase**. Then, when you are finished setting the values for the template, click **Create**. 4. When the deployment completes (it typically takes around 30 minutes), export the cluster certificate file from the cluster head node. 
In a later step, you import this public certificate on the client computer to provide the server-side authentication for secure HTTP binding. - a. Connect to the head node by Remote Desktop from the Azure portal. + a. In the Azure portal, go to the dashboard, select the head node, and click **Connect** at the top of the page to connect using Remote Desktop. - ![Connect to the head node][connect] + b. Use standard procedures in Certificate Manager to export the head node certificate (located under Cert:\LocalMachine\My) without the private key. In this example, export *CN = hpc01.eastus.cloudapp.azure.com*. @@ -328,12 +328,12 @@ To use Http binding without an Azure storage queue, explicitly set the UseAzureQ ``` ### Use NetTcp binding -To use NetTcp binding, the configuration is similar to connecting to an on-premises cluster. You need to open a few endpoints on the head node VM. If you used the HPC Pack IaaS deployment script to create the cluster, for example, set the endpoints in the Azure classic portal as follows. +To use NetTcp binding, the configuration is similar to connecting to an on-premises cluster. You need to open a few endpoints on the head node VM. If you used the HPC Pack IaaS deployment script to create the cluster, for example, set the endpoints in the Azure portal as follows. 1. Stop the VM. 2. Add the TCP ports 9090, 9087, 9091, 9094 for the Session, Broker, Broker worker, and Data services, respectively - ![Configure endpoints][endpoint] + ![Configure endpoints][endpoint-new-portal] 3. Start the VM. The SOA client application requires no changes except altering the head name to the IaaS cluster full name. 
@@ -347,6 +347,7 @@ The SOA client application requires no changes except altering the head name to [github]: ./media/excel-cluster-hpcpack/github.png [template]: ./media/excel-cluster-hpcpack/template.png [parameters]: ./media/excel-cluster-hpcpack/parameters.png +[parameters-new-portal]: ./media/excel-cluster-hpcpack/parameters-new-portal.png [create]: ./media/excel-cluster-hpcpack/create.png [connect]: ./media/excel-cluster-hpcpack/connect.png [cert]: ./media/excel-cluster-hpcpack/cert.png @@ -355,4 +356,5 @@ The SOA client application requires no changes except altering the head name to [options]: ./media/excel-cluster-hpcpack/options.png [run]: ./media/excel-cluster-hpcpack/run.png [endpoint]: ./media/excel-cluster-hpcpack/endpoint.png +[endpoint-new-portal]: ./media/excel-cluster-hpcpack/endpoint-new-portal.png [udf]: ./media/excel-cluster-hpcpack/udf.png diff --git a/articles/virtual-machines/windows/media/excel-cluster-hpcpack/endpoint-new-portal.png b/articles/virtual-machines/windows/media/excel-cluster-hpcpack/endpoint-new-portal.png new file mode 100644 index 0000000000000..925c3303f1d21 Binary files /dev/null and b/articles/virtual-machines/windows/media/excel-cluster-hpcpack/endpoint-new-portal.png differ diff --git a/articles/virtual-machines/windows/media/excel-cluster-hpcpack/parameters-new-portal.png b/articles/virtual-machines/windows/media/excel-cluster-hpcpack/parameters-new-portal.png new file mode 100644 index 0000000000000..6e58acb1a2b26 Binary files /dev/null and b/articles/virtual-machines/windows/media/excel-cluster-hpcpack/parameters-new-portal.png differ diff --git a/articles/virtual-machines/windows/sqlclassic/virtual-machines-windows-classic-sql-server-premium-storage.md b/articles/virtual-machines/windows/sqlclassic/virtual-machines-windows-classic-sql-server-premium-storage.md index 68000ac6621a5..b9f3fa078006a 100644 --- a/articles/virtual-machines/windows/sqlclassic/virtual-machines-windows-classic-sql-server-premium-storage.md 
+++ b/articles/virtual-machines/windows/sqlclassic/virtual-machines-windows-classic-sql-server-premium-storage.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: vm-windows-sql-server ms.workload: iaas-sql-server -ms.date: 11/28/2016 +ms.date: 06/01/2017 ms.author: jroth --- @@ -1071,7 +1071,7 @@ For information for individual blobs: Get-AzureVM –ServiceName $destcloudsvc –Name $vmNameToMigrate | Add-AzureEndpoint -Name $epname -Protocol $prot -LocalPort $locport -PublicPort $pubport -ProbePort 59999 -ProbeIntervalInSeconds 5 -ProbeTimeoutInSeconds 11 -ProbeProtocol "TCP" -InternalLoadBalancerName $ilb -LBSetName $ilb -DirectServerReturn $true | Update-AzureVM - #STOP!!! CHECK in the Azure classic portal or Machine Endpoints through powershell that these Endpoints are created! + #STOP!!! CHECK in the Azure portal or Machine Endpoints through PowerShell that these Endpoints are created! #SET ACLs or Azure Network Security Groups & Windows FWs diff --git a/articles/virtual-machines/workloads/oracle/toc.md b/articles/virtual-machines/workloads/oracle/toc.md index 97d143b7da296..b04f1057bda1e 100644 --- a/articles/virtual-machines/workloads/oracle/toc.md +++ b/articles/virtual-machines/workloads/oracle/toc.md @@ -3,5 +3,7 @@ # Quickstarts ## [Create an Oracle DB](oracle-database-quick-create.md) # Tutorials -## [Configuring Oracle ASM](asm-configuration.md) -## [Configuring Oracle DataGuard](configuring-oracle-dataguard.md) +## [Configuring Oracle ASM](configure-oracle-asm.md) +## [Configuring Oracle DataGuard](configure-oracle-dataguard.md) +## [Configuring Oracle GoldenGate](configure-oracle-golden-gate.md) + diff --git a/articles/virtual-machines/workloads/sap/sap-hana-backup-guide.md b/articles/virtual-machines/workloads/sap/sap-hana-backup-guide.md index 8dedc8ed1732c..7679d1a5844d6 100644 --- a/articles/virtual-machines/workloads/sap/sap-hana-backup-guide.md +++ b/articles/virtual-machines/workloads/sap/sap-hana-backup-guide.md @@ -118,7 +118,7 
@@ Azure Backup service uses Azure VM extensions to take care of the file system co The SAP HANA article [Planning Your Backup and Recovery Strategy](https://help.sap.com/saphelp_hanaplatform/helpdata/en/ef/085cd5949c40b788bba8fd3c65743e/content.htm) states a basic plan to do backups: - Storage snapshot (daily) -- Complete data backup using file or backing (once a week) +- Complete data backup using file or backint format (once a week) - Automatic log backups Optionally, one could go completely without storage snapshots; they could be replaced by HANA delta backups, like incremental or differential backups (see [Delta Backups](https://help.sap.com/saphelp_hanaplatform/helpdata/en/c3/bb7e33bb571014a03eeabba4e37541/content.htm)). diff --git a/articles/vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md b/articles/vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md index daf98f792ea54..d3cfbb190f1b9 100644 --- a/articles/vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md +++ b/articles/vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: hero-article ms.tgt_pltfrm: na ms.workload: infrastructure-services -ms.date: 05/01/2017 +ms.date: 05/31/2017 ms.author: cherylmc --- @@ -40,7 +40,7 @@ A Site-to-Site VPN gateway connection is used to connect your on-premises networ Verify that you have met the following criteria before beginning your configuration: -* Verify that you want to work with the Resource Manager deployment model. [!INCLUDE [deployment models](../../includes/vpn-gateway-deployment-models-include.md)] +* Verify that you want to work with the Resource Manager deployment model. [!INCLUDE [deployment models](../../includes/vpn-gateway-classic-rm-include.md)] * A compatible VPN device and someone who is able to configure it. For more information about compatible VPN devices and device configuration, see [About VPN Devices](vpn-gateway-about-vpn-devices.md).
* An externally facing public IPv4 address for your VPN device. This IP address cannot be located behind a NAT. * If you are unfamiliar with the IP address ranges located in your on-premises network configuration, you need to coordinate with someone who can provide those details for you. When you create this configuration, you must specify the IP address range prefixes that Azure will route to your on-premises location. None of the subnets of your on-premises network can overlap with the virtual network subnets that you want to connect to. @@ -69,19 +69,21 @@ Gateway IP Config = gwipconfig1 VPNType = RouteBased GatewayType = Vpn ConnectionName = myGWConnection + ``` + ## 1. Connect to your subscription -[!INCLUDE [vpn-gateway-ps-login](../../includes/vpn-gateway-ps-login-include.md)] +[!INCLUDE [PowerShell login](../../includes/vpn-gateway-ps-login-include.md)] ## 2. Create a virtual network and a gateway subnet -If you don't already have a virtual network, create one. When creating a virtual network, make sure that the address spaces you specify don't overlap any of the address spaces that you have on your on-premises network. For this configuration, you also need a gateway subnet. The virtual network gateway uses a gateway subnet that contains the IP addresses that are used by the VPN gateway services. When you create a gateway subnet, it must be named 'GatewaySubnet'. If you name it something else, you create a subnet, but Azure won't treat it as a gateway subnet. +If you don't already have a virtual network, create one. When creating a virtual network, make sure that the address spaces you specify don't overlap any of the address spaces that you have on your on-premises network. -The size of the gateway subnet that you specify depends on the VPN gateway configuration that you want to create. While it is possible to create a gateway subnet as small as /29, we recommend that you create a larger subnet that includes more addresses by selecting /27 or /28.
Using the larger gateway subnet allows for enough IP addresses to accommodate possible future configurations. +[!INCLUDE [About gateway subnets](../../includes/vpn-gateway-about-gwsubnet-include.md)] -[!INCLUDE [vpn-gateway-no-nsg](../../includes/vpn-gateway-no-nsg-include.md)] +[!INCLUDE [No NSG warning](../../includes/vpn-gateway-no-nsg-include.md)] ### To create a virtual network and a gateway subnet @@ -135,21 +137,21 @@ Use the following values: * The *GatewayIPAddress* is the IP address of your on-premises VPN device. Your VPN device cannot be located behind a NAT. * The *AddressPrefix* is your on-premises address space. -- To add a local network gateway with a single address prefix: +To add a local network gateway with a single address prefix: ```powershell New-AzureRmLocalNetworkGateway -Name LocalSite -ResourceGroupName testrg ` -Location 'West US' -GatewayIpAddress '23.99.221.164' -AddressPrefix '10.0.0.0/24' ``` -- To add a local network gateway with multiple address prefixes: +To add a local network gateway with multiple address prefixes: ```powershell New-AzureRmLocalNetworkGateway -Name LocalSite -ResourceGroupName testrg ` -Location 'West US' -GatewayIpAddress '23.99.221.164' -AddressPrefix @('10.0.0.0/24','20.0.0.0/24') ``` -- To modify IP address prefixes for your local network gateway:
+To modify IP address prefixes for your local network gateway:
Sometimes your local network gateway prefixes change. The steps you take to modify your IP address prefixes depend on whether you have created a VPN gateway connection. See the [Modify IP address prefixes for a local network gateway](#modify) section of this article. ## 4. Request a Public IP address @@ -227,9 +229,9 @@ There are a few different ways to verify your VPN connection. [!INCLUDE [Verify connection](../../includes/vpn-gateway-verify-connection-ps-rm-include.md)] -## Connect to a virtual machine +## To connect to a virtual machine -[!INCLUDE [Connect to VM](../../includes/vpn-gateway-connect-vm-s2s-include.md)] +[!INCLUDE [Connect to a VM](../../includes/vpn-gateway-connect-vm-s2s-include.md)] ## Modify IP address prefixes for a local network gateway @@ -240,7 +242,7 @@ If the IP address prefixes that you want routed to your on-premises location cha ## Modify the gateway IP address for a local network gateway -[!INCLUDE [Modify gw IP](../../includes/vpn-gateway-modify-lng-gateway-ip-rm-include.md)] +[!INCLUDE [Modify gateway IP address](../../includes/vpn-gateway-modify-lng-gateway-ip-rm-include.md)] ## Next steps diff --git a/articles/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-cli.md b/articles/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-cli.md index e94ec73fcdbd9..4f5dbdc54a012 100644 --- a/articles/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-cli.md +++ b/articles/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-cli.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: hero-article ms.tgt_pltfrm: na ms.workload: infrastructure-services -ms.date: 04/24/2017 +ms.date: 06/01/2017 ms.author: cherylmc --- @@ -40,7 +40,7 @@ A Site-to-Site VPN gateway connection is used to connect your on-premises networ Verify that you have met the following criteria before beginning configuration: -* Verify that you want to work with the Resource Manager deployment model. 
[!INCLUDE [deployment models](../../includes/vpn-gateway-deployment-models-include.md)] +* Verify that you want to work with the Resource Manager deployment model. [!INCLUDE [deployment models](../../includes/vpn-gateway-classic-rm-include.md)] * A compatible VPN device and someone who is able to configure it. For more information about compatible VPN devices and device configuration, see [About VPN Devices](vpn-gateway-about-vpn-devices.md). * An externally facing public IPv4 address for your VPN device. This IP address cannot be located behind a NAT. * If you are unfamiliar with the IP address ranges located in your on-premises network configuration, you need to coordinate with someone who can provide those details for you. When you create this configuration, you must specify the IP address range prefixes that Azure will route to your on-premises location. None of the subnets of your on-premises network can overlap with the virtual network subnets that you want to connect to. @@ -182,6 +182,10 @@ After a short while, the connection will be established. If you want to use another method to verify your connection, see [Verify a VPN Gateway connection](vpn-gateway-verify-connection-resource-manager.md). +## To connect to a virtual machine + +[!INCLUDE [Connect to a VM](../../includes/vpn-gateway-connect-vm-s2s-include.md)] + ## Common tasks This section contains common commands that are helpful when working with site-to-site configurations. For the full list of CLI networking commands, see [Azure CLI - Networking](/cli/azure/network).
diff --git a/articles/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-portal.md b/articles/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-portal.md index 32ac8ca84d8aa..4c313fbf2b66b 100644 --- a/articles/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-portal.md +++ b/articles/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-portal.md @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: hero-article ms.tgt_pltfrm: na ms.workload: infrastructure-services -ms.date: 05/02/2017 +ms.date: 05/31/2017 ms.author: cherylmc --- @@ -40,13 +40,14 @@ A Site-to-Site VPN gateway connection is used to connect your on-premises networ Verify that you have met the following criteria before beginning your configuration: -* Verify that you want to work with the Resource Manager deployment model. [!INCLUDE [deployment models](../../includes/vpn-gateway-deployment-models-include.md)] +* Verify that you want to work with the Resource Manager deployment model. [!INCLUDE [deployment models](../../includes/vpn-gateway-classic-rm-include.md)] * A compatible VPN device and someone who is able to configure it. For more information about compatible VPN devices and device configuration, see [About VPN Devices](vpn-gateway-about-vpn-devices.md). * An externally facing public IPv4 address for your VPN device. This IP address cannot be located behind a NAT. * If you are unfamiliar with the IP address ranges located in your on-premises network configuration, you need to coordinate with someone who can provide those details for you. When you create this configuration, you must specify the IP address range prefixes that Azure will route to your on-premises location. None of the subnets of your on-premises network can overlap with the virtual network subnets that you want to connect to. ### Example values -When using these steps as an exercise, you can use the following example values: + +The examples in this article use the following values.
You can use these values to create a test environment, or refer to them to better understand the examples in this article. * **VNet Name:** TestVNet1 * **Address Space:** @@ -55,10 +56,10 @@ When using these steps as an exercise, you can use the following example values: * **Subnets:** * FrontEnd: 10.11.0.0/24 * BackEnd: 10.12.0.0/24 (optional for this exercise) - * GatewaySubnet: 10.11.255.0/27 +* **GatewaySubnet:** 10.11.255.0/27 * **Resource Group:** TestRG1 * **Location:** East US -* **DNS Server:** The IP address of your DNS server +* **DNS Server:** Optional. The IP address of your DNS server. * **Virtual Network Gateway Name:** VNet1GW * **Public IP:** VNet1GWIP * **VPN Type:** Route-based @@ -72,15 +73,14 @@ When using these steps as an exercise, you can use the following example values: [!INCLUDE [vpn-gateway-basic-vnet-rm-portal](../../includes/vpn-gateway-basic-vnet-s2s-rm-portal-include.md)] ## 2. Specify a DNS server -DNS is not required for Site-to-Site connections. However, if you want to have name resolution for resources that are deployed to your virtual network, you should specify a DNS server. This setting lets you specify the DNS server that you want to use for name resolution for this virtual network. It does not create a DNS server. + +DNS is not required to create a Site-to-Site connection. However, if you want to have name resolution for resources that are deployed to your virtual network, you should specify a DNS server. This setting lets you specify the DNS server that you want to use for name resolution for this virtual network. It does not create a DNS server. For more information about name resolution, see [Name Resolution for VMs and role instances](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md). [!INCLUDE [vpn-gateway-add-dns-rm-portal](../../includes/vpn-gateway-add-dns-rm-portal-include.md)] ## 3. 
Create the gateway subnet -The virtual network gateway uses a gateway subnet that contains the IP addresses that are used by the VPN gateway services. When you create a gateway subnet, it must be named 'GatewaySubnet'. If you name it something else, your connection configuration fails. - -The size of the gateway subnet that you specify depends on the VPN gateway configuration that you want to create. While it is possible to create a gateway subnet as small as /29, we recommend that you create a larger subnet that includes more addresses by selecting /27 or /28. Using a larger gateway subnet allows for enough IP addresses to accommodate possible future configurations. +[!INCLUDE [vpn-gateway-aboutgwsubnet](../../includes/vpn-gateway-about-gwsubnet-include.md)] [!INCLUDE [vpn-gateway-add-gwsubnet-rm-portal](../../includes/vpn-gateway-add-gwsubnet-s2s-rm-portal-include.md)] @@ -91,9 +91,9 @@ The size of the gateway subnet that you specify depends on the VPN gateway confi ## 5. Create the local network gateway -The local network gateway typically refers to your on-premises location. You give the site a name by which Azure can refer to it, then specify the IP address of the on-premises VPN device to which you will create a connection. You also specify the IP address prefixes that will be routed through the VPN gateway to the VPN device. The address prefixes you specify are the prefixes located on your on-premises network. If your on-premises network changes, you can easily update the prefixes. +The local network gateway typically refers to your on-premises location. You give the site a name by which Azure can refer to it, then specify the IP address of the on-premises VPN device to which you will create a connection. You also specify the IP address prefixes that will be routed through the VPN gateway to the VPN device. The address prefixes you specify are the prefixes located on your on-premises network. 
If your on-premises network changes or you need to change the public IP address for the VPN device, you can easily update the values later. -[!INCLUDE [vpn-gateway-add-lng-s2s-rm-portal](../../includes/vpn-gateway-add-lng-s2s-rm-portal-include.md)] +[!INCLUDE [Add local network gateway](../../includes/vpn-gateway-add-lng-s2s-rm-portal-include.md)] ## 6. Configure your VPN device @@ -103,17 +103,21 @@ Site-to-Site connections to an on-premises network require a VPN device. In this - The Public IP address of your virtual network gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the Public IP address of your VPN gateway using the Azure portal, navigate to **Virtual network gateways**, then click the name of your gateway. -[!INCLUDE [vpn-gateway-configure-vpn-device-rm](../../includes/vpn-gateway-configure-vpn-device-rm-include.md)] +[!INCLUDE [Configure a VPN device](../../includes/vpn-gateway-configure-vpn-device-rm-include.md)] ## 7. Create the VPN connection Create the Site-to-Site VPN connection between your virtual network gateway and your on-premises VPN device. -[!INCLUDE [vpn-gateway-add-site-to-site-connection-rm-portal](../../includes/vpn-gateway-add-site-to-site-connection-s2s-rm-portal-include.md)] +[!INCLUDE [Add connections](../../includes/vpn-gateway-add-site-to-site-connection-s2s-rm-portal-include.md)] ## 8. 
Verify the VPN connection -[!INCLUDE [Azure portal](../../includes/vpn-gateway-verify-connection-portal-rm-include.md)] +[!INCLUDE [Verify - Azure portal](../../includes/vpn-gateway-verify-connection-portal-rm-include.md)] + +## To connect to a virtual machine + +[!INCLUDE [Connect to a VM](../../includes/vpn-gateway-connect-vm-s2s-include.md)] ## Next steps diff --git a/includes/iot-hub-get-started-extended.md b/includes/iot-hub-get-started-extended.md new file mode 100644 index 0000000000000..b28a1153954b9 --- /dev/null +++ b/includes/iot-hub-get-started-extended.md @@ -0,0 +1,26 @@ +## Extended IoT scenarios: Use other Azure services and tools + +When you have connected your device to IoT Hub, you can explore additional scenarios that use other Azure tools and services: + +| Scenario | Azure service or tool | +|----------------------------------------------------------- |------------------------------------| +| [Manage IoT Hub messages][Mg_IoT_Hub_Msg] | iothub-explorer tool | +| [Manage your IoT device][Mg_IoT_Dv] | iothub-explorer tool | +| [Save IoT Hub messages to Azure storage][Sv_IoT_Msg_Stor] | Azure table storage | +| [Visualize sensor data][Vis_Data] | Microsoft Power BI, Azure Web Apps | +| [Forecast weather with sensor data][Weather_Forecast] | Azure Machine Learning | +| [Automatic anomaly detection and reaction][Anomaly_Detect] | Azure Logic Apps | + +## Next steps + +When you have completed these tutorials, you can further explore the capabilities of IoT Hub in the [Developer guide][lnk-dev-guide]. You can find additional tutorials in the [How To][lnk-how-to] section. 
+ + +[Mg_IoT_Hub_Msg]: ../articles/iot-hub/iot-hub-explorer-cloud-device-messaging.md +[Mg_IoT_Dv]: ../articles/iot-hub/iot-hub-device-management-iothub-explorer.md +[Sv_IoT_Msg_Stor]: ../articles/iot-hub/iot-hub-store-data-in-azure-table-storage.md +[Vis_Data]: ../articles/iot-hub/iot-hub-live-data-visualization-in-power-bi.md +[Weather_Forecast]: ../articles/iot-hub/iot-hub-weather-forecast-machine-learning.md +[Anomaly_Detect]: ../articles/iot-hub/iot-hub-monitoring-notifications-with-azure-logic-apps.md +[lnk-dev-guide]: ../articles/iot-hub/iot-hub-devguide.md +[lnk-how-to]: ../articles/iot-hub/iot-hub-how-to.md \ No newline at end of file diff --git a/includes/media/vpn-gateway-add-dns-rm-portal/add_dns_server.png b/includes/media/vpn-gateway-add-dns-rm-portal/add_dns_server.png new file mode 100644 index 0000000000000..a3a4eaef44fab Binary files /dev/null and b/includes/media/vpn-gateway-add-dns-rm-portal/add_dns_server.png differ diff --git a/includes/media/vpn-gateway-add-gw-s2s-rm-portal-include/newgw.png b/includes/media/vpn-gateway-add-gw-s2s-rm-portal-include/newgw.png index fc1ffe8b483c2..2cb99b880fa04 100644 Binary files a/includes/media/vpn-gateway-add-gw-s2s-rm-portal-include/newgw.png and b/includes/media/vpn-gateway-add-gw-s2s-rm-portal-include/newgw.png differ diff --git a/includes/media/vpn-gateway-add-gw-s2s-rm-portal-include/pip.png b/includes/media/vpn-gateway-add-gw-s2s-rm-portal-include/pip.png new file mode 100644 index 0000000000000..1f43932af4c27 Binary files /dev/null and b/includes/media/vpn-gateway-add-gw-s2s-rm-portal-include/pip.png differ diff --git a/includes/media/vpn-gateway-add-gw-s2s-rm-portal-include/vnet_gw.png b/includes/media/vpn-gateway-add-gw-s2s-rm-portal-include/vnet_gw.png new file mode 100644 index 0000000000000..e42588799a9f1 Binary files /dev/null and b/includes/media/vpn-gateway-add-gw-s2s-rm-portal-include/vnet_gw.png differ diff --git 
a/includes/media/vpn-gateway-add-gwsubnet-s2s-rm-portal-include/add-gw-subnet.png b/includes/media/vpn-gateway-add-gwsubnet-s2s-rm-portal-include/add-gw-subnet.png new file mode 100644 index 0000000000000..26758ba541914 Binary files /dev/null and b/includes/media/vpn-gateway-add-gwsubnet-s2s-rm-portal-include/add-gw-subnet.png differ diff --git a/includes/media/vpn-gateway-add-gwsubnet-s2s-rm-portal-include/gwsubnetip.png b/includes/media/vpn-gateway-add-gwsubnet-s2s-rm-portal-include/gwsubnetip.png new file mode 100644 index 0000000000000..3ad300e2585bb Binary files /dev/null and b/includes/media/vpn-gateway-add-gwsubnet-s2s-rm-portal-include/gwsubnetip.png differ diff --git a/includes/media/vpn-gateway-add-lng-s2s-rm-portal-include/createlng.png b/includes/media/vpn-gateway-add-lng-s2s-rm-portal-include/createlng.png index 1a80affa98da6..c29e72fb8c88e 100644 Binary files a/includes/media/vpn-gateway-add-lng-s2s-rm-portal-include/createlng.png and b/includes/media/vpn-gateway-add-lng-s2s-rm-portal-include/createlng.png differ diff --git a/includes/media/vpn-gateway-add-site-to-site-connection-s2s-rm-portal-include/connection1.png b/includes/media/vpn-gateway-add-site-to-site-connection-s2s-rm-portal-include/connection1.png new file mode 100644 index 0000000000000..d8e75228ff411 Binary files /dev/null and b/includes/media/vpn-gateway-add-site-to-site-connection-s2s-rm-portal-include/connection1.png differ diff --git a/includes/storage-development-environment-include.md b/includes/storage-development-environment-include.md index 8ef47c6b67b72..9015c4908141d 100644 --- a/includes/storage-development-environment-include.md +++ b/includes/storage-development-environment-include.md @@ -74,7 +74,7 @@ To configure your connection string, open the `app.config` file from Solution Ex For example, your configuration setting appears similar to: ```xml - + ``` To target the storage emulator, you can use a shortcut that maps to the well-known account name and key. 
In that case, your connection string setting is: diff --git a/includes/vpn-gateway-about-gwsubnet-include.md b/includes/vpn-gateway-about-gwsubnet-include.md new file mode 100644 index 0000000000000..7d59f63eb7a3d --- /dev/null +++ b/includes/vpn-gateway-about-gwsubnet-include.md @@ -0,0 +1,3 @@ +The virtual network gateway uses a specific subnet called the 'GatewaySubnet'. The gateway subnet contains the IP addresses that are used by the VPN gateway services. When you create a gateway subnet, it must be named 'GatewaySubnet'. Naming a subnet 'GatewaySubnet' tells Azure where to create the gateway services. If you name the subnet something else, your VPN gateway configuration will fail. + +The IP addresses in the GatewaySubnet are allocated to the gateway services. When you create the GatewaySubnet, you specify the number of IP addresses that the subnet contains. The size of the GatewaySubnet that you specify depends on the VPN gateway configuration that you want to create. While it is possible to create a GatewaySubnet as small as /29, we recommend that you create a larger subnet that includes more addresses by selecting /27 or /28. Using a larger gateway subnet allows for enough IP addresses to accommodate possible future configurations. \ No newline at end of file diff --git a/includes/vpn-gateway-add-dns-rm-portal-include.md b/includes/vpn-gateway-add-dns-rm-portal-include.md index 6db79e2b26da3..addd47d44847e 100644 --- a/includes/vpn-gateway-add-dns-rm-portal-include.md +++ b/includes/vpn-gateway-add-dns-rm-portal-include.md @@ -1,5 +1,8 @@ -1. On the **Settings** page for your virtual network, navigate to **DNS Servers** and click to open the DNS servers blade. -2. On the **DNS Servers** page, under **DNS servers**, select **Custom**. -3. In the **DNS Server** field, in the **Add DNS server** box, enter the IP address of the DNS server that you want to use for name resolution.
When you are done adding DNS servers, click **Save** at the top of the blade to save your configuration. +1. On the **Settings** page for your virtual network, navigate to **DNS Servers** and click to open the **DNS servers** blade. - ![Custom DNS](./media/vpn-gateway-add-dns-rm-portal/add_dns.png) \ No newline at end of file + ![Add DNS server](./media/vpn-gateway-add-dns-rm-portal/add_dns_server.png "Add DNS Server") + + - **DNS Servers:** Select **Custom**. + - **Add DNS server:** Enter the IP address of the DNS server that you want to use for name resolution. + +2. When you are done adding DNS servers, click **Save** at the top of the blade. \ No newline at end of file diff --git a/includes/vpn-gateway-add-gw-s2s-rm-portal-include.md b/includes/vpn-gateway-add-gw-s2s-rm-portal-include.md index 09253f9b890ce..1662d51062314 100644 --- a/includes/vpn-gateway-add-gw-s2s-rm-portal-include.md +++ b/includes/vpn-gateway-add-gw-s2s-rm-portal-include.md @@ -1,21 +1,24 @@ -1. On the left side of the portal page, click **+** and type 'Virtual Network Gateway' in search. In **Results**, locate and click **Virtual network gateway**. At the bottom of the **Virtual network gateway** blade, click **Create**. This opens the **Create virtual network gateway** blade. -2. On the **Create virtual network gateway** blade, fill in the values for your virtual network gateway. +1. On the left side of the portal page, click **+** and type 'Virtual Network Gateway' in search. In **Results**, locate and click **Virtual network gateway**. +2. At the bottom of the 'Virtual network gateway' blade, click **Create**. This opens the **Create virtual network gateway** blade. - ![Create virtual network gateway blade fields](./media/vpn-gateway-add-gw-s2s-rm-portal-include/newgw.png "New gateway") -3. **Name**: Name your gateway. This is not the same as naming a gateway subnet. It's the name of the gateway object you are creating. -4. **Gateway type**: Select **VPN**.
VPN gateways use the virtual network gateway type **VPN**. -5. **VPN type**: Select the VPN type that is specified for your configuration. Most configurations require a Route-based VPN type. -6. **SKU**: Select the gateway SKU from the dropdown. The SKUs listed in the dropdown depend on the VPN type you select. -7. **Location**: You may need to scroll to see Location. Adjust the **Location** field to point to the location where your virtual network is located. If the location is not pointing to the region where your virtual network resides, the virtual network will not appear in the next step 'Choose a virtual network' dropdown. -8. **Virtual network**: Choose the virtual network to which you want to add this gateway. Click **Virtual network** to open the **Choose a virtual network** blade. Select the VNet. If you don't see your VNet, make sure the **Location** field is pointing to the region in which your virtual network is located. -9. **Create public IP address**: This blade creates a public IP address object to which a public IP address will be dynamically assigned. Click **Public IP address** to open the **Choose public IP address** blade. Click **+Create New** to open the **Create public IP address blade**. Input a name for your public IP address. Click **OK** to save your changes to this blade. The IP address is dynamically assigned when the VPN gateway is created. VPN Gateway currently only supports *Dynamic* Public IP address allocation. However, this does not mean that the IP address changes after it has been assigned to your VPN gateway. The only time the Public IP address changes is when the gateway is deleted and re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of your VPN gateway. 
+ ![Create virtual network gateway blade fields](./media/vpn-gateway-add-gw-s2s-rm-portal-include/vnet_gw.png "New gateway") - ![Create public IP](./media/vpn-gateway-add-gw-s2s-rm-portal-include/createpip.png "Create PIP") -10. **Subscription**: Verify that the correct subscription is selected. -11. **Resource group**: This setting is determined by the Virtual Network that you select. -12. Don't adjust the **Location** after you've specified the previous settings. -13. Verify the settings. You can select **Pin to dashboard** at the bottom of the blade if you want your gateway to appear on the dashboard. -14. Click **Create** to begin creating the gateway. The settings will be validated and you'll see the "Deploying Virtual network gateway" tile on the dashboard. Creating a gateway can take up to 45 minutes. You may need to refresh your portal page to see the completed status. - - ![Create gateway](./media/vpn-gateway-add-gw-s2s-rm-portal-include/creategw.png "Create gateway") -15. After the gateway is created, view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway will appear as a connected device. You can click the connected device (your virtual network gateway) to view more information. \ No newline at end of file +3. On the **Create virtual network gateway** blade, specify the values for your virtual network gateway. + + - **Name**: Name your gateway. This is not the same as naming a gateway subnet. It's the name of the gateway object you are creating. + - **Gateway type**: Select **VPN**. VPN gateways use the virtual network gateway type **VPN**. + - **VPN type**: Select the VPN type that is specified for your configuration. Most configurations require a Route-based VPN type. + - **SKU**: Select the gateway SKU from the dropdown. The SKUs listed in the dropdown depend on the VPN type you select. 
For more information about gateway SKUs, see [Gateway SKUs](../articles/vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsku).
+   - **Location**: You may need to scroll to see **Location**. Adjust the **Location** field to point to the region where your virtual network resides. If **Location** is not pointing to that region, the virtual network will not appear in the next step's 'Choose a virtual network' dropdown.
+   - **Virtual network**: Choose the virtual network to which you want to add this gateway. Click **Virtual network** to open the 'Choose a virtual network' blade. Select the VNet. If you don't see your VNet, make sure the **Location** field is pointing to the region in which your virtual network is located.
+   - **Public IP address**: The 'Create public IP address' blade creates a public IP address object. The public IP address is dynamically assigned when the VPN gateway is created. VPN Gateway currently supports only *Dynamic* public IP address allocation. However, this does not mean that the IP address changes after it has been assigned to your VPN gateway. The only time the public IP address changes is when the gateway is deleted and re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of your VPN gateway.
+
+     - First, click **Public IP address** to open the 'Choose public IP address' blade, then click **+Create New** to open the 'Create public IP address' blade.
+     - Next, input a **Name** for your public IP address, then click **OK** at the bottom of this blade to save your changes.
+
+       ![Create public IP](./media/vpn-gateway-add-gw-s2s-rm-portal-include/pip.png "Create PIP")
+
+4. Verify the settings. You can select **Pin to dashboard** at the bottom of the blade if you want your gateway to appear on the dashboard.
+5. Click **Create** to begin creating the VPN gateway. 

The settings will be validated and you'll see the "Deploying Virtual network gateway" tile on the dashboard. Creating a gateway can take up to 45 minutes. You may need to refresh your portal page to see the completed status. + +After the gateway is created, view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway will appear as a connected device. You can click the connected device (your virtual network gateway) to view more information. \ No newline at end of file diff --git a/includes/vpn-gateway-add-gwsubnet-s2s-rm-portal-include.md b/includes/vpn-gateway-add-gwsubnet-s2s-rm-portal-include.md index cc906214ff36e..8e00d48c062bd 100644 --- a/includes/vpn-gateway-add-gwsubnet-s2s-rm-portal-include.md +++ b/includes/vpn-gateway-add-gwsubnet-s2s-rm-portal-include.md @@ -1,9 +1,9 @@ 1. In the portal, navigate to the virtual network for which you want to create a virtual network gateway. 2. In the **Settings** section of your VNet blade, click **Subnets** to expand the Subnets blade. -3. On the **Subnets** blade, click **+Gateway subnet** at the top. This will open the **Add subnet** blade. - - ![Add the gateway subnet](./media/vpn-gateway-add-gwsubnet-s2s-rm-portal-include/addgwsubnet.png "Add the gateway subnet") -4. The **Name** for your subnet will automatically be filled in with the value 'GatewaySubnet'. This value is required in order for Azure to recognize the subnet as the gateway subnet. Adjust the auto-filled **Address range** values to match your configuration requirements. +3. On the **Subnets** blade, click **+Gateway subnet** at the top. This will open the **Add subnet** blade. - ![Adding the subnet](./media/vpn-gateway-add-gwsubnet-s2s-rm-portal-include/gwsubnet.png "Adding the subnet") + ![Add the gateway subnet](./media/vpn-gateway-add-gwsubnet-s2s-rm-portal-include/add-gw-subnet.png "Add the gateway subnet") +4. 
The **Name** for your subnet will automatically be filled in with the value 'GatewaySubnet'. This value is required in order for Azure to recognize the subnet as the GatewaySubnet. Adjust the auto-filled **Address range** values to match your configuration requirements. + + ![Adding the gateway subnet](./media/vpn-gateway-add-gwsubnet-s2s-rm-portal-include/gwsubnetip.png "Adding the gateway subnet") 5. Click **OK** at the bottom of the blade to create the subnet. \ No newline at end of file diff --git a/includes/vpn-gateway-add-lng-s2s-rm-portal-include.md b/includes/vpn-gateway-add-lng-s2s-rm-portal-include.md index a058f96ec55e9..91e644a8ad7f0 100644 --- a/includes/vpn-gateway-add-lng-s2s-rm-portal-include.md +++ b/includes/vpn-gateway-add-lng-s2s-rm-portal-include.md @@ -1,10 +1,15 @@ -1. In the portal, from **All resources**, click **+Add**. In the **Everything** blade search box, type **Local network gateway**, then click to search. This will return a list. Click **Local network gateway** to open the blade, then click **Create** to open the **Create local network gateway** blade. - - ![create local network gateway](./media/vpn-gateway-add-lng-s2s-rm-portal-include/createlng.png) -2. On the **Create local network gateway blade**, specify a **Name** for your local network gateway object. -3. Specify a valid public **IP address** for the VPN device or virtual network gateway to which you want to connect.
This is the public IP address of the VPN device that you want to connect to. It cannot be behind NAT and has to be reachable by Azure. *Use your own values, not the values shown in the screenshot*. -4. **Address Space** refers to the address ranges for the network that this local network represents. You can add multiple address space ranges. Make sure that the ranges you specify here do not overlap with ranges of other networks that you want to connect to. Azure will route the address range that you specify to the on-premises VPN device IP address. *Use your own values here, not the values shown in the screenshot*. -5. For **Subscription**, verify that the correct subscription is showing. -6. For **Resource Group**, select the resource group that you want to use. You can either create a new resource group, or select one that you have already created. -7. For **Location**, select the location that this object will be created in. You may want to select the same location that your VNet resides in, but you are not required to do so. -8. Click **Create** to create the local network gateway. \ No newline at end of file +1. In the portal, from **All resources**, click **+Add**. +2. In the **Everything** blade search box, type **Local network gateway**, then click to search. This will return a list. Click **Local network gateway** to open the blade, then click **Create** to open the **Create local network gateway** blade. + + ![create local network gateway](./media/vpn-gateway-add-lng-s2s-rm-portal-include/createlng.png) + +3. On the **Create local network gateway blade**, specify the values for your local network gateway. + + - **Name:** Specify a name for your local network gateway object. + - **IP address:** This is the public IP address of the VPN device that you want Azure to connect to. Specify a valid public IP address. The IP address cannot be behind NAT and has to be reachable by Azure. 
If you don't have the IP address right now, you can use the values shown in the screenshot, but you'll need to go back later and replace the placeholder IP address with the public IP address of your VPN device. Otherwise, Azure will not be able to connect.
+   - **Address Space:** The address ranges for the network that this local network represents. You can add multiple address space ranges. Make sure that the ranges you specify here do not overlap with ranges of other networks that you want to connect to. Azure will route the address range that you specify to the on-premises VPN device IP address. *Use your own values here, not the values shown in the screenshot*.
+   - **Subscription:** Verify that the correct subscription is showing.
+   - **Resource Group:** Select the resource group that you want to use. You can either create a new resource group, or select one that you have already created.
+   - **Location:** Select the location that this object will be created in. You may want to select the same location that your VNet resides in, but you are not required to do so.
+
+4. When you have finished specifying the values, click **Create** at the bottom of the blade to create the local network gateway.
\ No newline at end of file
diff --git a/includes/vpn-gateway-add-site-to-site-connection-s2s-rm-portal-include.md b/includes/vpn-gateway-add-site-to-site-connection-s2s-rm-portal-include.md
index 8a4e0eca6c79b..b76d8ad15cbbe 100644
--- a/includes/vpn-gateway-add-site-to-site-connection-s2s-rm-portal-include.md
+++ b/includes/vpn-gateway-add-site-to-site-connection-s2s-rm-portal-include.md
@@ -1,15 +1,16 @@
-1. Locate your virtual network gateway.
-2. Click **Connections**. At the top of the Connections blade, click **+Add** to open the **Add connection** blade.
-
-   ![Create Site-to-Site connection](./media/vpn-gateway-add-site-to-site-connection-s2s-rm-portal-include/connection.png)
-3. On the **Add connection** blade, **Name** your connection.
-4. 

For **Connection type**, select **Site-to-site(IPSec)**.
-5. For **Virtual network gateway**, the value is fixed because you are connecting from this gateway.
-6. For **Local network gateway**, click **Choose a local network gateway** and select the local network gateway that you want to use.
-7. For **Shared Key**, the value here must match the value that you are using for your local on-premises VPN device. In the example, we used 'abc123', but you can (and should) use something more complex. The important thing is that the value you specify here must be the same value that you specified when configuring your VPN device.
-8. The remaining values for **Subscription**, **Resource Group**, and **Location** are fixed.
-9. Click **OK** to create your connection. You'll see *Creating Connection* flash on the screen.
-10. When the connection is complete, it appears in the **Connections** blade of the virtual network gateway.
-
-   ![Create Site-to-Site connection](./media/vpn-gateway-add-site-to-site-connection-s2s-rm-portal-include/connectionstatus450.png)
+1. Navigate to and open the blade for your virtual network gateway. There are multiple ways to navigate. In our example, we navigated to the gateway 'VNet1GW' by going to **TestVNet1 -> Overview -> Connected devices -> VNet1GW**.
+2. On the blade for VNet1GW, click **Connections**. At the top of the Connections blade, click **+Add** to open the **Add connection** blade.

+   ![Create Site-to-Site connection](./media/vpn-gateway-add-site-to-site-connection-s2s-rm-portal-include/connection1.png)
+
+3. On the **Add connection** blade, fill in the values to create your connection.
+
+   - **Name:** Name your connection. We use **VNet1toSite2** in our example.
+   - **Connection type:** Select **Site-to-site (IPsec)**.
+   - **Virtual network gateway:** The value is fixed because you are connecting from this gateway. 

+   - **Local network gateway:** Click **Choose a local network gateway** and select the local network gateway that you want to use. In our example, we use **Site2**.
+   - **Shared Key:** The value here must match the value that you are using for your local on-premises VPN device. In the example, we used 'abc123', but you can (and should) use something more complex. The important thing is that the value you specify here must be the same value that you specified when configuring your VPN device.
+   - The remaining values for **Subscription**, **Resource Group**, and **Location** are fixed.
+
+4. Click **OK** to create your connection. You'll see *Creating Connection* flash on the screen.
+5. You can view the connection in the **Connections** blade of the virtual network gateway. The Status will go from *Unknown* to *Connecting*, and then to *Succeeded*.
\ No newline at end of file
diff --git a/includes/vpn-gateway-basic-vnet-s2s-rm-portal-include.md b/includes/vpn-gateway-basic-vnet-s2s-rm-portal-include.md
index 316e188b67b51..8f8b84ea14530 100644
--- a/includes/vpn-gateway-basic-vnet-s2s-rm-portal-include.md
+++ b/includes/vpn-gateway-basic-vnet-s2s-rm-portal-include.md
@@ -1,17 +1,17 @@
-To create a VNet in the Resource Manager deployment model by using the Azure portal, follow the steps below. The screenshots are provided as examples. Be sure to replace the values with your own. For more information about working with virtual networks, see the [Virtual Network Overview](../articles/virtual-network/virtual-networks-overview.md).
+To create a VNet in the Resource Manager deployment model by using the Azure portal, follow the steps below. Use the [example values](#values) if you are doing these steps as a tutorial. Otherwise, be sure to replace the values with your own. For more information about working with virtual networks, see the [Virtual Network Overview](../articles/virtual-network/virtual-networks-overview.md).
 1. 

From a browser, navigate to the [Azure portal](http://portal.azure.com) and sign in with your Azure account. 2. Click **New**. In the **Search the marketplace** field, type 'Virtual Network'. Locate **Virtual Network** from the returned list and click to open the **Virtual Network** blade. -3. Near the bottom of the Virtual Network blade, from the **Select a deployment model** list, select **Resource Manager**, and then click **Create**. -4. On the **Create virtual network** blade, configure the VNet settings. When you fill in the fields, the red exclamation mark will become a green check mark when the characters entered in the field are valid. -5. The **Create virtual network** blade looks similar to the following example. There may be values that are auto-filled. If so, replace the values with your own. - +3. Near the bottom of the Virtual Network blade, from the **Select a deployment model** list, select **Resource Manager**, and then click **Create**. This opens the 'Create virtual network' blade. + ![Create virtual network blade](./media/vpn-gateway-basic-vnet-s2s-rm-portal-include/createvnet.png "Create virtual network blade") -6. **Name**: Enter the name for your Virtual Network. -7. **Address space**: Enter the address space. If you have multiple address spaces to add, add your first address space. You can add additional address spaces later, after creating the VNet. Make sure that the address space that you specify does not overlap with the address space for your on-premises location. -8. **Subnet name**: Add the subnet name and subnet address range. You can add additional subnets later, after creating the VNet. -9. **Subscription**: Verify that the Subscription listed is the correct one. You can change subscriptions by using the drop-down. -10. **Resource group**: Select an existing resource group, or create a new one by typing a name for your new resource group. 
If you are creating a new group, name the resource group according to your planned configuration values. For more information about resource groups, visit [Azure Resource Manager Overview](../articles/azure-resource-manager/resource-group-overview.md#resource-groups). -11. **Location**: Select the location for your VNet. The location determines where the resources that you deploy to this VNet will reside. -12. Select **Pin to dashboard** if you want to be able to find your VNet easily on the dashboard, and then click **Create**. -13. After clicking **Create**, you will see a tile on your dashboard that will reflect the progress of your VNet. The tile changes as the VNet is being created. \ No newline at end of file +4. On the **Create virtual network** blade, configure the VNet settings. When you fill in the fields, the red exclamation mark becomes a green check mark when the characters entered in the field are valid. + + - **Name**: Enter the name for your virtual network. In this example, we use TestVNet1. + - **Address space**: Enter the address space. If you have multiple address spaces to add, add your first address space. You can add additional address spaces later, after creating the VNet. Make sure that the address space that you specify does not overlap with the address space for your on-premises location. + - **Subnet name**: Add the first subnet name and subnet address range. You can add additional subnets and the gateway subnet later, after creating this VNet. + - **Subscription**: Verify that the subscription listed is the correct one. You can change subscriptions by using the drop-down. + - **Resource group**: Select an existing resource group, or create a new one by typing a name for your new resource group. If you are creating a new group, name the resource group according to your planned configuration values. 
For more information about resource groups, visit [Azure Resource Manager Overview](../articles/azure-resource-manager/resource-group-overview.md#resource-groups). + - **Location**: Select the location for your VNet. The location determines where the resources that you deploy to this VNet will reside. + +5. Select **Pin to dashboard** if you want to be able to find your VNet easily on the dashboard, and then click **Create**. After clicking **Create**, you will see a tile on your dashboard that will reflect the progress of your VNet. The tile changes as the VNet is being created. \ No newline at end of file diff --git a/includes/vpn-gateway-connect-vm-s2s-include.md b/includes/vpn-gateway-connect-vm-s2s-include.md index cc745c3412e6a..da38529f82832 100644 --- a/includes/vpn-gateway-connect-vm-s2s-include.md +++ b/includes/vpn-gateway-connect-vm-s2s-include.md @@ -1,6 +1,6 @@ You can connect to a VM that is deployed to your VNet by creating a Remote Desktop Connection to your VM. The best way to initially verify that you can connect to your VM is to connect by using its private IP address, rather than computer name. That way, you are testing to see if you can connect, not whether name resolution is configured properly. -1. Locate the private IP address. You can find the private IP address of a VM by either looking at the properties for the VM in the Azure portal, or by using PowerShell. +1. Locate the private IP address. You can find the private IP address of a VM in multiple ways. Below, we show the steps for the Azure portal and for PowerShell. - Azure portal - Locate your virtual machine in the Azure portal. View the properties for the VM. The private IP address is listed. 
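The PowerShell option mentioned above can be sketched as follows; the VM name `VM1` and resource group `TestRG1` are illustrative assumptions:

```powershell
# Look up the VM, then find the NIC attached to it; the private IP address
# lives on the NIC's IP configuration, not on the VM object itself.
$vm  = Get-AzureRmVM -Name VM1 -ResourceGroupName TestRG1
$nic = Get-AzureRmNetworkInterface | Where-Object { $_.Id -eq $vm.NetworkProfile.NetworkInterfaces[0].Id }
$nic.IpConfigurations[0].PrivateIpAddress
```

The last line outputs the private IP address, which you can then use as the target of your Remote Desktop Connection.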
diff --git a/includes/vpn-gateway-gwsku-include.md b/includes/vpn-gateway-gwsku-include.md index 3219986b038dd..7a753358bea27 100644 --- a/includes/vpn-gateway-gwsku-include.md +++ b/includes/vpn-gateway-gwsku-include.md @@ -2,9 +2,9 @@ When you create a virtual network gateway, you need to specify the gateway SKU t |**SKU** | **S2S/VNet-to-VNet
Tunnels** | **P2S
Connections** | **Aggregate
Throughput** | |--- | --- | --- | --- | -|**VpnGw1**| Max. 30 | Max. 256 | 500 Mbps | -|**VpnGw2**| Max. 30 | Max. 512 | 1 Gbps | -|**VpnGw3**| Max. 30 | Max. 512 | 1.25 Gbps | +|**VpnGw1**| Max. 30 | Max. 128 | 500 Mbps | +|**VpnGw2**| Max. 30 | Max. 128 | 1 Gbps | +|**VpnGw3**| Max. 30 | Max. 128 | 1.25 Gbps | |**Basic** | Max. 10 | Max. 128 | 100 Mbps | | | | | | diff --git a/includes/vpn-gateway-modify-ip-prefix-rm-include.md b/includes/vpn-gateway-modify-ip-prefix-rm-include.md index 8d1d7e27ff45a..a7b563fa9c35c 100644 --- a/includes/vpn-gateway-modify-ip-prefix-rm-include.md +++ b/includes/vpn-gateway-modify-ip-prefix-rm-include.md @@ -1,21 +1,21 @@ ### To modify local network gateway IP address prefixes - no gateway connection -- To add additional address prefixes: - - ```powershell - $local = Get-AzureRmLocalNetworkGateway -Name MyLocalNetworkGWName -ResourceGroupName MyRGName ` - Set-AzureRmLocalNetworkGateway -LocalNetworkGateway $local ` - -AddressPrefix @('10.0.0.0/24','20.0.0.0/24','30.0.0.0/24') - ``` - -- To remove address prefixes:
- Leave out the prefixes that you no longer need. In this example, we no longer need prefix 20.0.0.0/24 (from the previous example), so we update the local network gateway, excluding that prefix.
- 
-   ```powershell
-   $local = Get-AzureRmLocalNetworkGateway -Name MyLocalNetworkGWName -ResourceGroupName MyRGName `
-   Set-AzureRmLocalNetworkGateway -LocalNetworkGateway $local `
-   -AddressPrefix @('10.0.0.0/24','30.0.0.0/24')
-   ```
+To add additional address prefixes:
+
+```powershell
+$local = Get-AzureRmLocalNetworkGateway -Name MyLocalNetworkGWName -ResourceGroupName MyRGName
+Set-AzureRmLocalNetworkGateway -LocalNetworkGateway $local `
+-AddressPrefix @('10.0.0.0/24','20.0.0.0/24','30.0.0.0/24')
+```
+
+To remove address prefixes:
+Leave out the prefixes that you no longer need. In this example, we no longer need prefix 20.0.0.0/24 (from the previous example), so we update the local network gateway, excluding that prefix.
+
+```powershell
+$local = Get-AzureRmLocalNetworkGateway -Name MyLocalNetworkGWName -ResourceGroupName MyRGName
+Set-AzureRmLocalNetworkGateway -LocalNetworkGateway $local `
+-AddressPrefix @('10.0.0.0/24','30.0.0.0/24')
+```
 
 ### To modify local network gateway IP address prefixes - existing gateway connection
 
diff --git a/includes/vpn-gateway-table-gwtype-aggtput-include.md b/includes/vpn-gateway-table-gwtype-aggtput-include.md
index 17c7ce72a2321..a864e25234342 100644
--- a/includes/vpn-gateway-table-gwtype-aggtput-include.md
+++ b/includes/vpn-gateway-table-gwtype-aggtput-include.md
@@ -1,4 +1,4 @@
-The following table shows the gateway types and the estimated aggregate throughput by gateway SKU. This table applies to both the Resource Manager and classic deployment models. Pricing differs between gateway SKUs. For more information, see [VPN Gateway Pricing](https://azure.microsoft.com/pricing/details/vpn-gateway).
+The following table shows the gateway types and the estimated aggregate throughput by gateway SKU. This table applies to the Resource Manager and classic deployment model original SKUs (Basic, Standard, and High Performance), not the newly released SKUs. Pricing differs between gateway SKUs. For more information, see [VPN Gateway Pricing](https://azure.microsoft.com/pricing/details/vpn-gateway).
 
 Note that the UltraPerformance gateway SKU is not represented in this table. For information about the UltraPerformance SKU, see the [ExpressRoute](../articles/expressroute/expressroute-about-virtual-network-gateways.md) documentation.
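Because the throughput figures above differ between the original SKUs and the newly released ones, it can help to confirm which SKU an existing gateway uses before comparing numbers. A quick AzureRM PowerShell check, assuming a gateway named `VNet1GW` in resource group `TestRG1` (both names illustrative):

```powershell
# Retrieve the gateway and inspect its SKU tier,
# for example Basic, Standard, HighPerformance, or VpnGw1/VpnGw2/VpnGw3.
$gw = Get-AzureRmVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1
$gw.Sku.Tier
```

If the tier is one of the new VpnGw SKUs, use the tunnel/connection/throughput table for those SKUs instead of the aggregate-throughput table above.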