Hello, and welcome to the Migrating to Google Cloud Platform course. In this course, you will learn about migration strategies to Google Cloud, and you will learn about Google Cloud's fundamentals, including access management, resource hierarchy, and cost management. You will then learn about GCP's virtual machine offering, Compute Engine. In addition, you will learn about Virtual Private Cloud, Google's globally distributed network. After learning the GCP-specific content, you will learn how to migrate virtual machines from your on-premises environment or Amazon Web Services via GCP's workload mobility application, Migrate for Compute Engine. After learning how to migrate virtual machines to the cloud, you will learn how to manage your growing infrastructure footprint in GCP and put governance checks and balances in place, in addition to managing the user life cycle and machine authentication. Lastly, you will learn how to monitor and log your virtual machines, and how to leverage the elasticity of the cloud with instance groups, autoscalers, and load balancers.
Hello, and welcome to the Introduction to Cloud Migration module. In this module, you will learn about cloud computing characteristics, the difference between running your workloads on-premises versus running them in the cloud, reasons to move to the cloud, and the common migration strategies to the cloud. In this video, you will learn about what defines cloud computing, the difference in financial expenditure, and common reasons to move to the cloud. First, computing resources are on-demand and self-service. Cloud computing customers use an automated interface and get the processing power, storage, and network they need with no human intervention. Second, resources are accessible over a network from any location. Benefiting from an extensive and broad-reaching infrastructure, providers allocate resources to customers from a large pool, which allows customers to benefit from economies of scale. Customers don't have to know or care about the exact physical location of these resources. Third, resources are elastic. Customers who need more resources can get them rapidly, and when they need less, they can scale back. And finally, customers pay only for what they use or reserve as they go. If they stop using resources, they simply stop paying for them.
In an on-premises environment, you have full ownership of and responsibility for your hardware. You have to purchase your hardware, which generally involves large asset purchases paid in a lump sum that depreciate over time. That is capital expense. In addition, you have to pay for its day-to-day operations, like electricity, and invest in maintaining and securing the hardware throughout its life cycle: from the physical security of the hardware and the premises in which it is hosted, through the encryption of the data on disk, to the integrity of the network, maintaining failovers, et cetera. These are called operational expenses. These expenses combined are the total cost of ownership: how much it costs to run your infrastructure. When you move your application to Google Cloud Platform, you pay as you go for the resources that you use, without any upfront commitments or obligations. In addition, Google handles many of the lower-level infrastructure layers and their security. Because of its scale, Google can deliver a higher level of security at these layers than most customers could afford by themselves.
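To make the capex-versus-opex comparison concrete, here is a minimal arithmetic sketch. The dollar figures, the three-year straight-line depreciation, and the usage hours are hypothetical assumptions for illustration, not figures from this course:

```python
# Hypothetical TCO sketch: on-premises cost = depreciated capital expense
# plus yearly operational expense; cloud cost = pay only for hours used.

def on_prem_annual_tco(hardware_cost, depreciation_years, annual_opex):
    # Capital expense spread over the depreciation period, plus opex
    # (electricity, maintenance, physical security, and so on).
    return hardware_cost / depreciation_years + annual_opex

def cloud_annual_cost(hourly_rate, hours_used):
    # Pay-as-you-go: no upfront commitment, billed only for usage.
    return hourly_rate * hours_used

# Assumed: a $30,000 server depreciated over 3 years with $8,000/year opex,
# versus a cloud VM at $0.50/hour needed only ~2,080 hours per year.
on_prem = on_prem_annual_tco(30_000, 3, 8_000)
cloud = cloud_annual_cost(0.50, 2_080)
print(f"on-prem: ${on_prem:,.0f}/year  cloud: ${cloud:,.0f}/year")
```

The point is not the specific numbers but the structure: the on-premises figure is largely fixed regardless of utilization, while the cloud figure scales with actual use.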
There are many reasons to move to the cloud. Here are a few examples. Many enterprises have contracts with private data centers that need to be periodically renewed, or their hardware is approaching end of life. At these times, considerations like cost adjustments or other limiting factors often come up. As a result, companies tend to re-evaluate their cost-benefit analysis for running their workloads on-premises and consider the benefits of migrating to the cloud. Sometimes capacity is needed only at specific times of the year, for instance, during seasonal peaks like the holidays or specific events. Companies can benefit from the on-demand capacity during these times without paying for it during the rest of the year, thus optimizing their spend and operational efficiency. Some workloads aren't predictable and are nonlinear, which can leave you with a choice between over-provisioning resources that tend to be underutilized during quieter times, or under-provisioning and compromising user experience and system stability. Thanks to the elasticity of the cloud, you can increase your capacity and resources based on demand and dispose of the resources when you don't need them anymore. Instead of having to pay for maximum on-prem capacity, you can adjust your capacity on demand in the cloud and pay as you go. With security threats increasing in scale and severity, companies are migrating to the cloud to mitigate the risk. Public cloud offers vast resources for protecting you against threats, more than nearly any single company can invest by themselves.
In this video, you will learn the four common strategies to migrate your virtual machines to the cloud. When a company considers migrating to the cloud, they have four options: lift and shift, which is the main topic of this course; improve and move; and rebuild. In some edge cases, workloads might need to remain on-premises due to technical limitations. Lift and shift to the cloud provides access to elastic resource allocation, world-leading security, a pay-per-use model, and many other cloud-native features without having to rewrite the application. In a lift-and-shift migration, you move workloads from a source environment to a target environment with minor to no modifications or refactoring. The modifications you apply to the workloads in order to migrate are only the minimal changes needed for the workloads to operate in the target environment. Lift and shift can help companies get into the cloud relatively quickly and with relatively low risk. A lift-and-shift migration is ideal when a workload can operate as-is in the target environment, or when there is little to no business need for change. Because you migrate existing workloads with minimal refactoring, lift-and-shift migrations tend to be the quickest and safest compared to other options. Lift and shift can also extend the life of applications that have already proved their value. Perhaps they are on aging servers that would need to be replaced or just can't handle the new load, and moving to the cloud might offer a solution to some of the scaling problems with relatively little in the way of code changes. Another migration strategy is an improve-and-move migration, where you modernize the workload while migrating it. In this type of migration, you modify the workloads to take advantage of cloud-native capabilities, and not just to make them work in the new environment.
In a rebuild migration, you decommission an existing app and completely redesign and rewrite it as a cloud-native application. You can do this rip-and-replace migration if the current app isn't meeting your goals: for example, you don't want to maintain it, it's too costly to migrate it using one of the previously mentioned approaches, or it isn't supported on GCP. Because both the rebuild and improve-and-move strategies involve code refactoring, they are outside the scope of this course. Finally, sometimes keeping a specific workload on-premises is the only option, usually because of technical limitations. For instance: dependencies on mainframe machines, licensing that is incompatible with the cloud environment, compliance constraints like data locality in a specific country that doesn't have a cloud data center, or unsupported operating systems like Windows desktop images, NT servers, and virtual appliances.
Migrating workloads can be complex, so we broke it down into four main stages, introduced in this video. The first step is assess, which helps you discover your source environment. A comprehensive list of servers, storage usage statistics, applications, operating systems, and licenses are among the key data points to evaluate when identifying which virtual machines are suitable for cloud migration. One of the many environment discovery automation solutions on the market can help build a catalog of your source environment and even make migration recommendations and predict cost in the cloud. We will explore one of these solutions in the next module. In the prepare phase, you learn the foundational knowledge to architect and implement an infrastructure environment on Google Cloud Platform. During the migrate phase, you will configure the migration process, optionally test the solution, and start the migration to the cloud while trying to minimize disruption to your service. Lastly, during the optimize phase, after the virtual machines are running successfully in the cloud, you can focus on optimizing billing, performance, and processes. In this course, we start with the assess phase in Module 2, Assessing the Source Environment, where you discover the on-premises environment with an easy-to-use automation tool that will help determine which machines to migrate, make a cost analysis, and provide virtual machine rightsizing recommendations. We then progress to the Google Cloud Platform-specific modules: Module 3, GCP Fundamentals, and Module 4, Virtual Machines and Networks in the Cloud. These will cover all you need to know in order to create a solid foundation in the cloud. If you are already familiar with GCP, you can skip these modules. However, if you're migrating from on-premises or another cloud provider, we recommend going through them. Module 5 focuses on migrating virtual machines from vSphere on-premises or EC2 VMs on AWS.
It uses Migrate for Compute Engine, which is Google's own workload migration tool. We then discuss governance topics in Module 6, like identity, resource hierarchy, and network sharing. After the VMs are migrated, it's time to optimize. In Module 7, you will learn how to leverage the elasticity, automation, and globalization of the cloud infrastructure. We will also introduce monitoring and logging using Stackdriver, and how to interact with Cloud Support.
In this module, you discovered how to analyze your source environment via an assessment automation tool and how it assists you in identifying which virtual machines to migrate to the cloud. You also learned how the automation tool predicts the total cost of ownership of running virtual machines in the cloud and optimizes your cloud bill by providing virtual machine rightsizing recommendations based on their actual utilization. In the next module, we will discuss the destination environment for your workloads, which is Google Cloud. We will introduce you to some Google Cloud terminology, explain how resource hierarchy works in the Google Cloud environment, and share ways to control permissions using Cloud Identity and Access Management. Move on to the next module to learn more.
Hi, my name is Tad Einstein, and I'm a senior program manager here at Google. Welcome to the Assessing the Source Environment module. In this module, you will learn how to discover and analyze your source environment via an assessment automation tool. The tool will assist you in identifying virtual machines to migrate to the cloud, as well as predict the total cost of ownership of running these virtual machines in the cloud. In addition, you'll be able to optimize your cloud bill by applying virtual machine rightsizing recommendations based on actual utilization.
In this module, you will learn about the assess phase and the automation tools you can use in order to discover your environment. In this course, we will focus on a specific task within the assess phase, which includes discovering and assessing which virtual machines to lift and shift. This process will help uncover existing workloads and determine a group of virtual machines for the first wave of migration. When determining which VMs to migrate, you'll probably work with other members of your organization, like the security and application teams. Doing so early in the process can help you identify and remediate or bypass issues that might otherwise occur mid-migration. There are many automation tools on the market that can help list your virtual machine inventory, predict the total cost of ownership post-migration, and spot migration challenges and obstacles ahead of time. Google partners with CloudPhysics, StratoZone, and Cloudamize to provide you with free automated discovery tools to help you assess your environment. Each tool has distinct advantages and disadvantages, but for this course we will introduce CloudPhysics.
In this video, you will learn about CloudPhysics, one of the third-party solutions you can use to automatically discover your environment. CloudPhysics is a solution that monitors and analyzes IT infrastructures and offers insights and reports that help you better understand your total cost of ownership, while also allowing you to analyze the actual usage of resources over time. These capabilities enable you to make a side-by-side comparison between the total cost of running on-premises virtual machines versus running in the cloud, and will help you choose which machines in general to migrate. CloudPhysics also helps optimize your spend in the cloud by recommending the right size of VM to provision based on actual utilization. CloudPhysics provides an easy-to-deploy, nonintrusive virtual software appliance that is installed within your on-premises environment and just needs read-only access to VMware vCenter. CloudPhysics does not deploy any probes or agents to your VMware ESXi hosts, guest virtual machines, or AWS EC2 instances. In addition, all communications are achieved through existing management interfaces and have no performance impact on your virtual machines. If you're using AWS, CloudPhysics uses the same nonintrusive approach by leveraging AWS's APIs instead of harvesting data directly from the EC2 virtual machine instances. CloudPhysics collects data from your on-premises environment using a virtual appliance called the CloudPhysics Observer. Note that if you are planning a migration from an on-premises VMware environment, you need to install the virtual appliance, but if you're migrating virtual machines from AWS, you need to provide AWS Identity and Access Management (also known as IAM) permissions, and no installation of the CloudPhysics Observer is required. The CloudPhysics Observer is a minimal-resource appliance designed to collect data from within your VMware vCenter through read-only APIs.
It processes the data and shares it with the CloudPhysics web portal through secure means. The Observer collects your on-premises VMware-based performance and configuration data in order to provide you with a holistic picture of your environment, which we explore later in this module. For more information on how CloudPhysics handles your data, see the link attached to this video, as well as installation guides for both AWS and vSphere.
After installing CloudPhysics, you will be able to gain insight into your environment, which will help you choose the right virtual machines to migrate. In this video, you will learn how to run reports and choose the right virtual machines to migrate to the cloud, along with strategies and considerations for doing so. There are many different kinds of applications running inside virtual machines, for instance customer-facing applications, back-office developer tools, and experimental apps, to name a few. There are three categories to keep in mind when choosing virtual machines to migrate. Apps that are easy to move: these have fewer dependencies, are usually newer, and are written internally, so they have no licensing considerations; they are also more tolerant to scaling and other cloud patterns. Apps that are difficult to move: these have dependencies, are less tolerant to scaling, or have complex licensing requirements. Apps that won't be moved: some apps might not be good candidates to migrate because they run on specialized or older hardware, have business or regulatory requirements that make it necessary for them to stay in your data center, or have complex licensing requirements that don't allow them to move to the cloud. One of the ways to sort VMs for migration is to tag them by the level of migration difficulty that the applications running on them impose. When you install CloudPhysics, you can gain insight into the environment that runs your applications and are then able to make data-driven decisions. This data can particularly help with selecting which workloads to migrate to the cloud. CloudPhysics introduces the concept of cards, which are collections of data.
For instance, there's the Host Analysis card, which provides you with hardware-related reports, or the Guest OS Analysis card, which analyzes which systems within your infrastructure are running specific operating system types and versions. As we explore these cards, the main goal is to identify and tag virtual machines that are suitable for migration, harder to migrate, or should just remain on-premises. The Host Analysis card provides an overview of the current hardware running in your VMware vCenter environment, as well as versions of deployed VMware ESXi hypervisors. This card is useful for identifying and tagging virtual machines that run on VMware ESXi hypervisor versions that are about to enter or have already reached their end of support. Identifying these hypervisors helps mitigate common stability and security risks within your environment. Another subset of virtual machines worth paying attention to within the Host Analysis card is virtual machines supported by VMware hosts that run on hardware not included in VMware's hardware compatibility list, also known as the HCL. Finally, you can identify VMware hosts that are not compatible with your minimum desirable version of the ESXi hypervisor, whose VMs can also be good candidates for migration. For instance, if your organization only allows VMware hosts running ESXi version 6.7 or higher, the Host Analysis card can quickly identify hosts that are running older versions of ESXi which do not comply with your corporate policy. Virtual machines running on top of these out-of-compliance hypervisors can be strong candidates for cloud migration. Each workload in your data center has an operating system. Gaining insight into the operating system families and versions that are running in your data center can make you aware of the risks, opportunities, and general diversity within your organization's workloads.
Each environment is different, and the Guest OS Analysis card provides a filtering mechanism that will help you bulk-tag all virtual machines that run a specific version of an operating system. For example, if you have a license agreement for a specific operating system that cannot be migrated to the cloud, the Guest OS Analysis card can help you easily tag the virtual machines running that operating system and mark them as unsuitable for migration. I will show you how to do that in the upcoming demo video. Another useful filtering mechanism is tagging all machines that are currently powered off as not suitable for migration, because they might not be needed in your cloud environment. Knowing the estimated cost per workload for on-premises workloads is critical in order to make the business justification to move to the cloud, which is the purpose of the On-Prem IT Cloud Simulator card. Multiple variables can directly affect your cost per workload. Deep insight into hardware, software, management, environmental, and depreciation metrics is key to establishing an accurate total cost of ownership, or TCO, estimate. The On-Prem IT Cloud Simulator card will do just this, so that you can easily build a proper business justification for moving to the cloud. The GCP Cloud Simulator card will match your virtual machines to GCP Compute Engine virtual machine instance types and will predict the total cost of ownership of running them in the cloud. The simulator card can include manually tagged virtual machines or all the virtual machines that are compatible with GCP, and can also exclude virtual machines that cannot be migrated because of unsupported operating systems and other factors. When you run your infrastructure on-premises, there's a tendency to be more generous with resource allocation.
Most of your investment is paid up front when you buy the underlying hardware, and therefore it is not uncommon to over-provision virtual machines as a result. In addition to the potential to create contention between virtual machines as your fleet grows, this over-allocation of resources can balloon your cloud bill with no added benefit. When you lift and shift to the cloud, you pay for what you allocate. Therefore, proper rightsizing of virtual machine resources based on actual use is critical. Rightsizing is leveraging analytics-based recommendations for how to map on-premises instances to cloud instance types, with the ability to optimize for either performance or cost control. You can configure the simulation to estimate how much the migrated virtual machines will cost based on the same configuration as on-premises, or you can right-size based on virtual CPU parameters. Certain performance metrics, like peak CPU usage, can be misleading. Let's say, for instance, a software update caused the virtual CPU usage on a particular virtual machine to temporarily spike during the month. In a scenario like this, the virtual CPU peak usage would not be the best indicator of how many virtual CPUs your machine may need. It is for this reason that peak usage metrics are often considered less desirable for determining actual virtual CPU utilization. Despite this, the virtual CPU peak usage metric would ensure that your virtual machine has the necessary resources during those times of peak use. But remember, there's a financial cost associated with allocating cloud resources that you rarely need. A better solution to properly right-size your soon-to-be cloud environment is to leverage the 99th and 95th percentile performance metrics. The 99th percentile refers to the maximum CPU utilization rate for 99% of the time
a system is running, filtering out any odd and infrequent CPU spikes, which would have been represented in the previously mentioned peak usage metric. You can also think of it as: 1% of the time, the CPU was above that metric. The 95th percentile works the same way, but determines the maximum CPU utilization rate for 95% of the time a system is running. There's a contextual decision to make here between the application's tolerance for CPU spikes and the desired billing optimization. One last thing to note is that the longer the CloudPhysics Observer can collect data, the more accurate the statistics will be. Once you migrate to the cloud, Google will provide you with rightsizing recommendations for Compute Engine virtual machines running in the cloud within 24 hours of their operation.
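The percentile idea can be sketched in a few lines. This is a simplified illustration, not how CloudPhysics computes its recommendations; the sample data, the 8-vCPU source machine, and the sizing rule are all assumptions:

```python
import math
import statistics

# Hypothetical month of CPU-utilization samples (%) for an 8-vCPU VM:
# steady 25-35% load, with a single brief spike to 95%.
samples = [25] * 50 + [30] * 40 + [35] * 9 + [95]

peak = max(samples)
# quantiles(n=100) returns the 1st..99th percentile cut points.
cuts = statistics.quantiles(samples, n=100, method="inclusive")
p95, p99 = cuts[94], cuts[98]

on_prem_vcpus = 8
# Size for the p99 utilization level instead of the one-off peak.
rightsized = math.ceil(on_prem_vcpus * p99 / 100)
peak_sized = math.ceil(on_prem_vcpus * peak / 100)

print(f"peak={peak}%  p95={p95:.1f}%  p99={p99:.1f}%")
print(f"vCPUs if sized for peak: {peak_sized}, if sized for p99: {rightsized}")
```

With this sample, sizing for the one-off spike would keep all 8 vCPUs, while the 99th percentile suggests 3; that gap is exactly the tolerance-versus-cost decision described above.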
In this demo, you will learn how to analyze a CloudPhysics assessment to determine which virtual machines are suitable for migration. By leveraging CloudPhysics, you will be able to obtain a solid understanding of the cost associated with moving to Google Cloud compared to on-premises and other clouds, as well as the ability to logically group resources to ensure a well-planned and optimized cloud migration. To begin, we will log into the CloudPhysics web portal. Once logged in, we will be presented with a series of cards. Each card contains targeted data that is related to the assessment that we performed, either in AWS or in our VMware on-premises environment. In this case, let's take a look at what we have in our first card, the vCenter Summary card. The vCenter Summary card will show us various details about our VMware vSphere environment, which is controlled by vCenter. In this scenario, you can see that we're running version 5.5, with one virtual data center, 65 hosts, and 809 virtual machines that may or may not be good candidates for migration to Google Cloud. In order to get more details and determine if a virtual machine is a good fit for migration, we need to look at some of the other cards provided by CloudPhysics. The next card we will look at is the Host Analysis card. The Host Analysis card provides us with in-depth analytical details related to the VMware hosts upon which we run our virtual machines. To best utilize the data in the Host Analysis card, as well as the other cards we will use, let's go to the filters on the left side of the screen. As you can see, you can filter by many different criteria: the age of ESXi, your server vendor, which ESXi version is running on top of it, and various other items. In our scenario, we're looking for any hosts which are running an ESXi version that is not compatible with version 6.5. As you can see, CloudPhysics has filtered the results to all hosts that are not compatible with ESXi version 6.5.
Let's expand one of the hosts to get a little more information. This host, for example, is running ESXi version 5.5 Express Patch 13, and as you can tell, it's listed as end of support, and the messages show as such. In this scenario, we only really have two options: either upgrade our licensing with VMware, or move the workloads that run on this host into Google Cloud, where we do not need to worry about the VMware licensing costs and constraints. I recommend running through many scenarios, leveraging the filter feature within the Host Analysis and other cards, to really understand the right systems to target for your cloud migration. Let's go back to the main menu and look at another card. Let's take a look at the Guest OS Analysis card to really get in-depth details on the operating systems and workloads which we want to target for cloud migration. Upon clicking on the Guest OS Analysis card, we'll be presented with various details regarding the virtual machines running in our VMware clusters. As you can quickly see, the Guest OS Analysis card is broken up into several sections. The first is the guest OS breakdown, which shows the various operating systems that are running in our VMware vSphere environment. In addition, we have a list of end-of-support VMs, which are running operating systems that are no longer supported by their vendor. We also have a breakout of each virtual machine, as well as details related to that virtual machine and any items that require attention. Let's take a look at the filters and start applying filters that are really going to help us determine which virtual machines we want to move to Google Cloud. Let's say we want to find all servers that are currently powered on within our VMware environment and that are also running Windows Server 2016, since they may be good potential candidates to move to the cloud. To filter those, let's click on State and select systems that are powered on.
Then we will click on Guest OS Name and select Windows Server 2016. As you can tell, we have a total of 86 virtual machines running Windows Server 2016, and if you look at end of support, you will see that 86 of them are unknown. What that means is, it is unknown when the end of support is, since Microsoft, the vendor of this operating system, has not announced it yet. Now that we've sorted these virtual machines based on whether they're powered on and their operating system, we want to tag them as a group, since we may want to move them as a group. To do so, next to VM Name, click on the tag, and we will call these 'win2k16-powered-on'. Once done, click New Tag, and the tag will be created. We will also create one other tag. Let's choose systems that are currently powered off and that are running Windows Server 2003. As you can see, we have a single server. I have a feeling that this server is probably not the best server to migrate, since it is currently powered off and it's also running an operating system that is end of support. Because of this, I want to make sure that it is properly tagged, so I name it 'no-move-win2k3' and click New Tag. Now that we've created two tags, let's move to the next card. The next card that we will explore is the On-Prem IT Cloud Simulator. To do so, let's click on the card. The On-Prem IT Cloud Simulator allows us to get a good understanding of what we're currently paying for running these workloads on-premises. In our scenario, you may remember that we tagged two different workloads, one being Windows 2016 servers that are currently powered on, and the other being Windows 2003 servers that are currently powered off. We want to take those tags and apply them to this card so we can understand the granular pricing details of what these systems are actually costing us. To do so, click on None Selected underneath the Tags section, and let's search for one of our tags.
Let's first choose our 'no-move-win2k3' tag, and then we will choose our 'win2k16-powered-on' tag. Now that we have chosen both of these tags, we can very quickly see the annual cost as configured. We can also add additional information here to ensure that our cost estimates are as accurate as possible: everything from the physical host cost for the systems that we run our virtual machines on, to third-party annual licensing costs, like our Windows Servers, for example. We can also add in things like our environmental costs, such as electricity, our cooling-to-power ratio, and the average watts per host. Once you tailor these items, you will get a solid cost estimate of what your current on-premises costs are. We can do further modeling within the 'if the configuration was' drop-down menu. Currently, it's set to analyze peak virtual CPU and peak virtual RAM usage and provide an annual consumed cost after this adjustment. What this means is that a virtual machine, throughout the course of a month, for example, may peak at certain levels of virtual CPU and virtual RAM, but it may only use those resources once a month, or in a very short time period within that time frame. What you have to be careful about when moving to the cloud is that everything you use, you're charged for. So in these scenarios, you may not want to size for that peak usage, which may occur over a very small amount of time. It is for those reasons that we may want to consider leveraging the 99th percentile or 95th percentile. The 99th percentile states that 99% of the time, the CPU and RAM do not peak over certain levels. Maybe 1% of the time, they might go over. But the question you must ask yourself is: does it make sense to provision resources that handle that 1% spike?
Or can your workloads function 99% of the time within a given tolerance? As you can see, when you select the 95th percentile or 99th percentile, the cost is much less than it would have been if you had sized for peak usage. In this scenario, I will choose the 95th percentile and then compare cloud costs. Once the Cost Comparison of On-Prem IT versus Public Cloud screen loads, our next step will be to apply our filters to ensure that our numbers are accurate. To do so, I will click on None Selected and type in our two tags. As you can see, we're given a calculation of what our on-prem IT would currently cost, based on the entries we supplied previously for our cost as configured, and we also have the ability to appropriately right-size, as we've mentioned before. We can also see that our GCP costs are listed as well. Very quickly, you will notice that proper rightsizing and strategy when moving to the cloud will save your company a significant amount of money when done properly. Next, let's go back to our cards and select the GCP Cloud Simulator card. The GCP Cloud Simulator card provides pricing options for moving workloads from on-premises data centers into Google Cloud Platform. As we've done before, I will go to our filters and make sure to add the predefined tags that we've created. Next, we can run various scenarios related to the many options which are available when running in the cloud. For example, which location would we like these systems to run in? Is there a certain storage type that we're looking for, whether it be local, zonal, or regional? We're also able to provide storage media options, such as all solid-state disks, as well as options such as VM rightsizing, sole-tenant nodes, and also pricing discounts for situations where you want to commit to longer terms of using these virtual machines within Google Cloud.
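As a rough sketch of how a longer-term commitment changes a simulated bill: the hourly rate, fleet size, and the 30% discount below are assumed illustrative values, not actual GCP pricing.

```python
# Hypothetical effect of committing to longer terms: the same fleet,
# billed at an assumed discount off the on-demand rate.

def monthly_cost(vm_count, hourly_rate, hours_per_month=730, discount=0.0):
    # Estimated monthly bill, with an optional commitment discount applied.
    return vm_count * hourly_rate * hours_per_month * (1 - discount)

fleet, rate = 10, 0.10  # assumed: 10 VMs at $0.10/hour on demand
on_demand = monthly_cost(fleet, rate)
committed = monthly_cost(fleet, rate, discount=0.30)  # assumed 30% discount

print(f"on-demand: ${on_demand:,.2f}/month  committed: ${committed:,.2f}/month")
```

The simulator cards run this kind of what-if at much finer granularity, but the shape of the calculation is the same: the discount applies to a predictable, committed baseline of usage.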
Now we've gone ahead and grouped our virtual machines based on viability for moving to the cloud, analyzed what the cost is to run these workloads on premises, and then also provided a cost comparison to run them within Google Cloud. We should have a very good picture of what systems are appropriate for cloud migration, both from a functionality standpoint and a cost standpoint as well. Let's now go back to the main menu in CloudPhysics, which lists our cards. Now that you've seen how you can take your on-premises VMware workload resource utilization data and convert that into Google Cloud pricing, let's take a look at an AWS environment and how that would translate to Google Cloud from a pricing perspective. To do so, we will click on the AWS to GCP Cloud Migration Simulator card. Once selected, you will notice that we have assets which are all based in AWS. First, let's go ahead and select an instance class, D2. Next, let's select all D2 servers which are currently powered on. We'll now go ahead and apply a tag called 'D2 powered on' and click New Tag. We'll also create one more tag: let's choose our M3 instances and select M3 large, also making sure that these systems are all powered on, and tag them as 'M3 large powered on'. Now that we have tagged our systems, you'll also notice that you have similar options to what you had in the other cards, which allows you to make certain selections that are appropriate for your current environment. Next, let's go back to our main card deck. We will now click on the AWS to GCP Bill Analysis card. Although this card is in beta, I thought it would be valuable for you to see the kind of quality data that comes from these various cards, especially one comparing two cloud providers. As you can see in this sample data, we're able to quickly filter based on AWS accounts. In this case, I'll choose the AWS prod account.
In addition to various regions. Upon analysis, you will notice that there is matched and unmatched data. Matched data is where CloudPhysics can match up various offerings from AWS with those of Google Cloud to perform a like-for-like cost analysis. Unmatched data is where services may not align between the two cloud providers, which means that CloudPhysics would be unable to provide a price comparison for those services which exist in one cloud but not the other. Very quickly we can see, for example, that in Amazon EC2 we have approximately $4,800 in cloud cost, whereas if we were to run those same workloads within Google Cloud, they would be $3,700. You should be able to easily see the power that leveraging a tool like CloudPhysics will bring to your organization when either moving from on-premises to Google Cloud or from AWS to Google Cloud. Next, let's go back to our card deck and look at the remaining cards. The next card is the AWS Bill Analysis. Just like its namesake, this card allows you to drill down into your current AWS bill, which was analyzed leveraging CloudPhysics to give you greater insight and flexibility when understanding various costs within your cloud bill. Let's go back to the card deck again and click on our last card, which is the Cost Comparison of On-Prem IT versus Public Cloud. This is the same card that we've seen before when navigating through the other cards, and it very quickly allows us to determine the cost on-prem versus moving to GCP. Now that you understand the purpose of the various cards within CloudPhysics, your follow-up question is most likely: what's next? Well, the next process is to reach out to Google Cloud, and we will work with you to extract your data from CloudPhysics and then take that data and import it into Migrate for Compute Engine, so that you can begin your migration from VMware or AWS into Google Cloud easily and seamlessly. Assessing your environment is an important first step in your migration journey.
It is important to make data-driven decisions, selecting the virtual machines that are suitable not only from a technical perspective but from an overall architectural point of view as well. Make sure you take into account workload dependencies, data governance, and regulatory requirements.
In this module, you discovered how to analyze your source environment with an assessment automation tool and how it assists you in identifying which virtual machines to migrate to the cloud. You also learned how the automation tool predicts the total cost of ownership of running virtual machines in the cloud and optimizes your cloud bill by providing virtual machine recommendations based on their actual utilization. In the next module, we will discuss the destination environment for your workloads, which is Google Cloud. We will introduce you to some Google Cloud terminology, explain how resource hierarchy works in the Google Cloud environment, and share ways to control permissions using Cloud Identity and Access Management. Move on to the next module to learn more.
Welcome to the Google Cloud Platform Fundamentals module. After discovering your source environment, analyzing your topology, and carefully selecting virtual machines to migrate to the cloud, it's time to learn about the destination environment: Google Cloud Platform. We will compare the terminology that you're familiar with on premises or in AWS to the corresponding terminology in GCP, explain how resource hierarchy works in the Google Cloud Platform environment, and discover ways to control permissions with Cloud IAM. You will also learn how to limit consumption with quotas and budgets, predict cost, and visualize spend over time. All of this content will provide you with the fundamental knowledge to start running your workloads in Google Cloud Platform and will be the building blocks for the rest of this course. Feel free to skip this module if you're already familiar with these concepts in Google Cloud Platform.
In this module, we compare the terminology you are used to from your source environment to Google Cloud Platform's equivalent. When running virtual machines on premises, you are responsible for the hypervisor that runs them, whether VMware ESXi, Hyper-V, KVM, or Xen. You're also responsible for maintaining the underlying infrastructure, such as networking, storage, and hardware life cycles. In addition, when running on premises, you might choose not to run a hypervisor at all and install operating systems of your choice directly on bare-metal hardware. With Google Cloud, virtual machines are called Compute Engine instances, and they run on Google's Compute Engine service. With Compute Engine, you do not have to manage the underlying infrastructure or hypervisor, because Google maintains these services so that you can focus on your virtual machines instead of the underlying systems that run them. When providing storage on premises, you might use direct-attached storage (DAS), network-attached storage (NAS), or a storage area network (SAN) to provide storage services to your machines. These storage options provide a wide array of benefits, from minimizing latency with DAS, to providing easy-to-use file shares with NAS, to providing ultra-redundant, scalable, high-performance storage with SAN. GCP has a complementary storage offering, which maps to the storage technologies that you're familiar with on premises, whether it's Google's network-attached persistent disk technology, which supports standard spinning hard drives or SSDs, or high-performance direct-attached local SSDs, which support SCSI or NVMe interfaces. Google Cloud provides you with VM storage as a service so that you do not have to manage these components yourself. We will cover these storage technologies in greater detail in the next module. Typically, managing identity on premises can be accomplished with Active Directory or LDAP servers, which must remain highly available.
GCP provides a cloud-based identity-as-a-service offering called Cloud Identity, with which you can either extend your current directory services to the cloud via sync or create a new user directory for your cloud environment. With Google Cloud's Identity and Access Management, or IAM, you can grant granular control to your resources in your cloud environment. You can also sync your on-premises Active Directory with Cloud Identity. We will explore these in detail later in the course.
In this video, you will learn about Google Cloud Platform's resource hierarchy and its characteristics. It is easier to understand GCP's resource hierarchy from the bottom up. All of the resources you use, whether they are virtual machines, Cloud Storage buckets, tables in BigQuery, or anything else in GCP, are organized into a logical construct known as a project. In addition, projects also contain the billing information for the resources in the project, as well as the IAM permissions, which dictate who can do what with the resources within your project. Optionally, these projects may be organized into folders, with folders being able to contain other folders. All the folders and projects are brought together under the organization node, known as the org node. Projects, folders, and organization nodes are all places where policies can be defined and inherited. Some GCP resources let you apply policies on individual resources too, like Cloud Storage buckets. Let's explore these concepts in detail.
All Google Cloud Platform resources belong to a Google Cloud Platform project. Projects are the basis of enabling and using GCP services, like managing APIs, enabling billing, adding and removing collaborators and their assigned permissions, and enabling other Google services. Each project is a separate logical compartment with individual cloud resources, for instance Cloud Storage buckets, virtual machines, etcetera, and all of them belong to exactly one project. Different projects can have different owners and users; they're billed separately, and they're managed separately. Each GCP project has a name and a project ID that you can choose. The project ID is a permanent, unchangeable identifier, and it has to be unique across GCP. You will use project IDs in several contexts to tell GCP which project you want to work with. On the other hand, project names are for your convenience, and you can change them. GCP also assigns each of your projects a unique project number, which you will see displayed in various contexts, but using it is mostly outside of the scope of this course. In general, project IDs are made to be human-readable, and you will use them frequently to refer to projects. Projects can be nested inside folders. Folders let you assign policies to resources at the level of granularity you choose. The resources in a folder inherit IAM policies assigned to the folder. A folder can contain projects, other folders, or a combination of both. You can use folders to group projects under an organization in a hierarchy. For example, your organization might contain multiple departments, each of which has its own set of GCP resources. Folders allow you to group these resources on a per-department basis, or in a structure that maps to your organization's business or operational model. Folders give teams the ability to delegate administrative rights so that they can work independently but with consistency.
With regards to enforcing proper cloud governance, the resources in a folder inherit IAM policies from the folder. So if project three and project four are administered by the same team by design, you can apply IAM policies to folder B instead, which applies these policies to all projects within folder B. Without the ability to apply policies at the folder level, you would have to apply duplicate copies of the policy to both project three and project four individually, which would be a tedious and error-prone process. It is important to note that to use folders, you need an organization node at the top of the hierarchy. You probably want to organize all the projects in your company into a single structure. Most companies want to have centralized visibility of how resources are being used, and also to apply policies centrally. This is exactly what the organization node is designed for: it is the top of the Google Cloud Platform hierarchy. There are some special roles associated with the organization node. For example, you can designate an organization policy administrator so that only people with that privilege can change policies. You can also assign a Project Creator role, which is a great way to control who can spend money, and delegate permissions. So how do you get an organization node? In part, the answer depends on whether your company is also a G Suite customer. If you have a G Suite domain, GCP projects will automatically belong to your organization node, which is typically the domain name of your G Suite account. If you do not have a G Suite account, you can leverage Google Cloud Identity to create the organization node. We will introduce Cloud Identity later in this course. Here's an example of how you might organize your resources. In this example, there are three projects, each of which uses resources from several GCP services. You will notice in the diagram that we have not used any folders in the current organization structure.
If the use of folders would be helpful in the future, we can always implement them to apply policies as needed. Resources inherit the policies of their parent resource. For instance, if you set a policy at the organization level, it is automatically inherited by all children projects, and this inheritance is transitive, which means that all the resources in those projects inherit the policy too. There is one important rule to keep in mind: a privilege granted at a higher level in this hierarchy cannot be taken away by a policy at a lower level. For example, suppose that a policy applied on the bookshelf project gives user Pat only the right to modify a Cloud Storage bucket, but the policy at the organization level says that Pat has full admin rights on the Cloud Storage bucket. The more general policy takes effect. Keep this in mind as you design your policies.
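The inheritance rule can be illustrated with a short Python sketch. This is a toy model with invented names, not the real Cloud IAM API: the effective policy on a resource is the union of the bindings from that resource up through its ancestors, which is why the broader org-level grant wins in the Pat example.

```python
# Toy model of IAM policy inheritance (illustrative only; resource and
# member names are invented, and this is not the real Cloud IAM API).
# The effective policy is the UNION of bindings along the ancestry, so a
# narrow grant at a lower level cannot revoke a broader one from above.

HIERARCHY = {                       # child -> parent
    "bookshelf-project": "folder-b",
    "folder-b": "example.com",      # the organization node is the root
}

POLICIES = {                        # resource -> {member: set of roles}
    "example.com": {"pat@example.com": {"roles/storage.admin"}},
    "bookshelf-project": {"pat@example.com": {"roles/storage.objectAdmin"}},
}

def effective_roles(member, resource):
    """Union of the member's roles from the resource up to the root."""
    roles = set()
    node = resource
    while node is not None:
        roles |= POLICIES.get(node, {}).get(member, set())
        node = HIERARCHY.get(node)
    return roles

# Pat keeps the broader org-level admin role on the project, even though
# the project-level policy only granted the narrower role:
print(sorted(effective_roles("pat@example.com", "bookshelf-project")))
```

The key design point is the `|=` union: nothing in the walk ever removes a role, mirroring how a lower-level GCP policy can add permissions but never subtract inherited ones.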
The best place to manage your resources is on the Resource Management page, which you can find under IAM & admin. This page shows all the resources in an organization and lets you manage them. Let's go through an example of bringing on a new line of business to an organization. We'll start by adding a folder directly under the organization called Department Y. To do this, we can just click the Create Folder button and type in Department Y. A folder can exist under the organization or under another folder, based on your organization's structure. For example, we could create production and development folders to help organize your projects further for each department. Since Department Y is a new line of business, we'll create it at the organization level. Just like folders, when we create a project, we can choose a single parent location, such as our new folder. Now that we have multiple lines of business, our shared infrastructure project should probably have its own folder. Let's create a new folder called Shared Infrastructure. After that, we can move the project under our new folder by using the options menu. Now our lines of business and our shared infrastructure have their own independent folders. We can also manage permissions on the right-hand side for the organization, folders, and projects. Since we've just created a new folder for our shared infrastructure team, let's make sure they have Project Creator access to this new folder. We can select the folder and click Add Member. Let's add the shared-infra group and choose Project Creator. Now members of the shared-infra group can create projects under this folder, but it looks like everyone else in the organization can as well. That's because this permission is being inherited from the organization. We can use the toggle to see which permissions have been added directly or inherited. Since permissions are inherited, folders are also a good way to organize projects that have similar permissions requirements. Google recommends a least-privilege approach, so you should only grant access to those who absolutely need it. Let's go ahead and remove the permission for everyone in the org to create projects by choosing the organization and removing that permission. Other best practices include using groups wherever possible and testing permission changes before making the actual change.
In this video, I will introduce Cloud Identity and Access Management and how to use it to control and secure your cloud environment. When you build an application on your on-premises infrastructure, you're responsible for the entire stack. When you move an application to Google Cloud Platform, Google handles many of the lower layers of security. This concept is called the shared responsibility model, and it clearly defines which responsibilities are handled by the cloud provider, Google, and which responsibilities are handled by the customer. Because of its scale, Google can deliver a higher level of operational efficiency and security at these layers than most of its customers could afford to do on their own. As shown in the slide, the upper layers of the responsibility model remain the customer's responsibility. Google provides tools, such as IAM, to help customers implement the policies they choose at these layers. So what is Identity and Access Management? It is a way of identifying who can do what on which resource. The 'who' could be a person, group, or application. The 'what' refers to a specific privilege or action, and the resource could be any GCP service. For example, I can give you the privilege, or role, of Compute Viewer. This provides you with read-only access to get and list Compute Engine resources, without being able to read the data stored on them. The 'who' part of an IAM policy can be a Google account, a Google group, a service account, or a Cloud Identity domain. We will explore all of these identities later in the module. The 'can do what' part of an IAM policy is defined by a role. There are three kinds of roles in Cloud IAM; let's explore each in turn. The primitive roles are broad: you apply them to a GCP project, and they affect all resources in that project, from virtual machines to firewall rules, databases, and logs. These are the Owner, Editor, and Viewer roles.
If you're a viewer on a given resource, you can examine it but not change its state. If you're an editor, you can do everything a viewer can do, plus change its state. And if you're an owner, you can do everything an editor can do, plus manage roles and permissions on the resource. The Owner role on a project lets you do one more thing too: set up billing. Often, companies want someone to be able to control the billing for a project without the right to change resources in the project, and that's why you can grant someone the Billing Administrator role. But be careful: if you have several people working together on a project that contains sensitive data, primitive roles are probably too coarse a tool. Fortunately, GCP IAM provides finer-grained types of roles. GCP services offer their own sets of predefined roles, and they define where these roles can be applied. For example, later in this course, we'll talk more about Compute Engine, which offers virtual machines as a service. Compute Engine offers a set of predefined roles, and you can apply them to Compute Engine resources in a given project, a given folder, or the entire organization. As another example, consider Cloud Bigtable, which is a managed database service. Cloud Bigtable offers roles that can apply across an entire organization, to a particular project, or even to an individual Bigtable database instance. Consider Compute Engine's Instance Admin role: whoever has it can perform a certain set of actions on virtual machines. What sort of actions? Those listed here: listing them, reading and changing their configuration, and starting and stopping them. And which virtual machines? Well, that depends on where the role is applied. In this example, all the users of a certain Google group have the role, and they have it on all the virtual machines in project A.
Compute Engine has several predefined IAM roles. Let's look at three of these. The Compute Admin role provides full control of all Compute Engine resources. This includes all permissions that start with 'compute', which means that every action for any type of Compute Engine resource is permitted. The Network Admin role contains permissions to create, modify, and delete networking resources, except for firewall rules and SSL certificates. In other words, the Network Admin role allows read-only access to firewall rules, SSL certificates, and instances, to view their ephemeral IP addresses. The Storage Admin role contains permissions to create, modify, and delete disks, images, and snapshots. For example, if your company has someone who manages a project's images and you don't want them to have the Editor role on the project, grant their account the Storage Admin role on the project. Roles are meant to represent abstract functions and are customized to align with real jobs. But what if one of these roles does not have enough permissions, or you need something even finer grained? That's what custom roles permit. A lot of companies use a least-privilege model, in which each person in your organization has the minimal amount of privilege needed to do his or her job. So, for example, maybe I want to define an 'instance operator' role to allow some users to stop and start Compute Engine virtual machines, but not reconfigure them. Custom roles allow me to do that. A couple of cautions about custom roles. First, if you decide to use custom roles, you need to manage the permissions that make them up; some companies decide they'd rather stick with predefined roles. Second, custom roles can only be used at the project or organization level; they cannot be used at the folder level.
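The instance-operator idea can be sketched as a toy Python model of a least-privilege custom role. The role name is hypothetical, and while the permission strings mirror Compute Engine's compute.instances.* naming, this is only a simulation, not a real IAM call:

```python
# Toy least-privilege custom role: operators may cycle VMs but not
# reconfigure them. Role name is invented; permission strings are
# modeled on Compute Engine's compute.instances.* naming convention.

CUSTOM_ROLES = {
    "roles/custom.instanceOperator": {
        "compute.instances.list",
        "compute.instances.get",
        "compute.instances.start",
        "compute.instances.stop",
        # deliberately absent: compute.instances.setMachineType and other
        # mutation permissions, keeping the role minimal.
    }
}

def allowed(role, permission):
    """True if the custom role's permission set contains the permission."""
    return permission in CUSTOM_ROLES.get(role, set())

op = "roles/custom.instanceOperator"
print(allowed(op, "compute.instances.stop"))            # True
print(allowed(op, "compute.instances.setMachineType"))  # False
```

Expressing the role as an explicit permission set is the same discipline the video describes: you can audit exactly what the role grants, at the cost of maintaining that set yourself.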
Remember that when you give a user, group, or service account a role on a specific element of the resource hierarchy, the resulting policy applies to the element you chose, as well as to the elements below it in the hierarchy.
A service account is an account that belongs to your application instead of an individual end user. This provides an identity for carrying out server-to-server interactions in a project without supplying user credentials. For example, if you write an application that interacts with Cloud Storage, it must first authenticate to the Cloud Storage API. You can enable service accounts and grant read-write access to the account on the instance where you plan to run your application, then program the application to obtain credentials from the service account. Your application authenticates seamlessly to the API without embedding any secret keys or credentials in your instance image or application code.
It's time to apply what you've learned. In this lab, you'll use Cloud IAM to manage access control and grant access to employees and external users. You will also create a service account and assign it to a virtual machine.
Hello and welcome. I'm Philipp Maier, a course developer with Google Cloud Platform, and this is a brief tutorial on using Qwiklabs. In this course, you're about to use the interactive hands-on labs platform called Qwiklabs, which is part of Google Cloud. Qwiklabs allows you to get practical hands-on experience with GCP and provisions you with Google account credentials so you can access the GCP console at no cost. Once you reach the lab item in this course, click the Open button. You will then be prompted to provide the email that you want to use for your Qwiklabs account. If you already have a Qwiklabs account, you can use that email and then log in with your Qwiklabs password. If you don't have a Qwiklabs account, one will be created for you with the email that you provide. Once you're in Qwiklabs, click the Start Lab button and wait until 'Lab Running' is displayed. For each lab, you will have a timer with your remaining access time; you can see that right up here, and your lab will automatically end when the timer runs out. To get started, you want to click the Open Google Console button, and you're going to want to sign in with the username and password that are provided in this pane over here. So let me copy the username, click Open Google Console, and paste in that username. I'm going to go back and grab the password and paste it in as well. Qwiklabs creates a new account for you each time you launch a lab; therefore, you're going to have to click through some initial account setup windows. So in this case, I'm going to accept this over here. I don't need to do anything here; I can just click Done. And then once I'm in the GCP console, I can verify that I'm using the Qwiklabs-provided account and project ID. I first need to also agree to the terms of service, so I click Agree and Continue. And then over here within the dashboard, I can see the project name, the project ID, and the project number.
I can also see the project ID up here, and I want to make sure that these are the same and that they match the connection details that we have on the Qwiklabs page. Let me go and verify that this here corresponds to the project ID over here. We can see it's just cut off slightly there. There we go. So indeed, we are using the right project. We can also verify that we're using the right username: if I click on this icon up here, we can see we're not logged in with our own account but with the Qwiklabs-provided account. This is very crucial. So let's also verify that this corresponds to the username that we were provided over here. Okay. Now, some labs track your work within the Qwiklabs-provided GCP project, and if this is enabled, you'll see a score in the top right corner of the Qwiklabs window, as you can see right here, and your score increases as each of these objectives is met. You can click on the score to view, as you can see here, the individual steps. So let me go ahead and complete these activities to show you what that looks like. Now that I have completed the lab, I can see that my score has been updated and I'm ready to click End Lab. Once I click End Lab and confirm, the Qwiklabs-provided project and any resources within that project will be deleted, and I can continue learning on Pluralsight. That's it for this tutorial. Remember to use the Qwiklabs-provided credentials to sign in to the GCP console. Good luck with the labs, and enjoy the rest of this course.
In this lab, you used Identity and Access Management to grant access to both a Cloud Identity user and an external Gmail user. You also created a service account, granted it minimal permissions, and assigned the service account to a Compute Engine virtual machine. You are welcome to stay for a lab walkthrough, but remember that Google Cloud's user interface can change, so your environment might look slightly different. Now, in task one, we're going to grant access to users. For that, I'm going to first click on the navigation menu, which is the button in the upper left corner here, and then go to IAM & admin. Now, here I'm presented with the different members and their roles, and I could also view by roles if I wanted to. And within there, I can now filter the table by type 'user', and here I can see the current user that has access, which is me. This is the account that I am logged in with, and obviously your account will be different in your own lab. Now let's add a user. I'm going to click on Add up here, and we're going to start with a demo user; I'll type '@qwiklabs.net', and we'll see that there are a bunch of demo accounts in here. They all belong to the qwiklabs.net organization, which this project is a part of. So I can select one of these accounts, and then I need to give it a role. If I click on Select a Role, under Project I have all the primitive roles: Browser, Editor, Owner, and Viewer. Now, these roles are pretty broad. Instead, I want to give a more specific, predefined role. For that, I'm going to just search for Compute Admin, and you can see that that gives a member full control of all Compute Engine resources. And if you want more details, you can go into the documentation, which lists all of the different permissions that are associated with that role. So let me select that and then click Save.
Now you can see that I've added that member, and if I scroll to the right a little bit here, you can see that that user, or that member I should say, now has the Compute Admin role. Okay, so that's it for task one. Now, in task two, we're going to grant access to an external adviser. So, in the same menu, I'm going to again click on Add. But now, rather than using a user that's part of the organization, we're going to type in a Gmail Google account address. I'm going to use the one that's in the lab; you could add your own email if you wanted to. And then as a role, let's pretend that we want to give someone else editor access. So I'm just going to give the broad, primitive role Editor and now click Save, and we can see that that's also been updated here: I see that member, and I see that that member now has editor access. So if I had access to this Gmail address, I could log in here and I would have full editor access on this project. That's it for task two. Now, in task three, we're going to create a service account, and a service account is a special type of Google account that grants permissions to virtual machines instead of end users. So let's go ahead and do that. On the left-hand side, I can see that I have Service Accounts, so I'm going to click on that. And when we get in here, we're going to see that there are already some service accounts in here, one of which is the Compute Engine service account, which is created by default for every project. Now we're going to go ahead and create our own service account. So I'm going to click on Create Service Account, and I can give it a name. Specifically, the lab instructions are calling for a proof-of-concept app, so I'm going to use that exact name to make sure that I get all the credit during my lab. From here, I just click Create. Now, creating a service account actually ends here, but optionally we can add permissions, and we really want to do that.
Otherwise, the service account is pretty useless. So I'm going to click on Select a Role, and in the filter, let's search for 'storage viewer'. And here we see Storage Object Viewer; the permissions are read access to GCS, that's Google Cloud Storage, objects. So I'm going to click on that, then click Continue, and then I'm going to click on Done. So we've created a service account. But again, a service account is used by a machine, so let's go ahead with task four now and create a virtual machine with that custom service account. I'm going to go to the navigation menu and scroll down to Compute Engine, and in here, we're now going to click on Create to create a VM instance. Now, for the name, the lab specifically calls for 'poc-app', so we're going to use that. I could also select region, zone, and all sorts of things, but what we're really trying to do is just select the service account that we just created. So I scroll down, and under Service Account I see a drop-down; I can click on that, and I see the proof-of-concept app, so I can click on that and then go ahead and create the machine. Now this virtual machine, once it's up and running, will have those permissions of Storage Object Viewer. So the benefit is, if I have a storage bucket in this project, this machine now has access to view the objects that are in there; maybe it needs to use some images as part of running a website or something else. Either way, this machine now has access to that, and that's the end of the lab.
In this video, you learn how identities are managed in Google Cloud Platform. The topic of identity will be introduced in more detail in module six. Many new GCP customers get started by logging into the GCP console with a Google Gmail account. This approach is easy to get started with, but its disadvantage is that your team's identities are not centrally managed. For example, if someone leaves your organization, there is no centralized way to remove their access to your cloud resources immediately. GCP customers who are also G Suite customers can define GCP permissions in terms of G Suite users and groups. This way, when someone leaves your organization, an administrator can immediately disable their account and remove them from any associated groups using the Google Admin console. GCP customers who are not G Suite customers can get these same capabilities through Cloud Identity. Cloud Identity lets you manage users and groups using the Google Admin console, but you do not pay for G Suite collaboration products such as Gmail, Docs, Drive, and Calendar. Cloud Identity is available in a free and a premium edition. The premium edition adds capabilities for mobile device management and other advanced features. A service account is a special kind of account that belongs to an application or a virtual machine instance, instead of a person. Applications use service accounts to make authorized API calls. You can create as many service accounts as you need to represent the different logical components and security boundaries of your application. A Google group is a named collection of accounts and service accounts. Every group has a unique email address that is associated with the group. Google groups are a convenient way to apply roles and permissions to a collection of users. You can grant and change access controls for a whole Google group at once.
Instead of granting or changing access controls one at a time for individual users or service accounts. It is important to note that you cannot use Cloud IAM to create or manage your users or groups. Instead, you use Cloud Identity or G Suite within the Google Admin console to create and manage users. Using Google Cloud Directory Sync, also known as GCDS, your administrators can enable the capability to leverage GCP resources using the same usernames and passwords your company already uses for popular directory services platforms like Microsoft Active Directory or LDAP. We will go into more detail on GCDS in module six.
In this video, I will introduce the various ways you can interact with Google Cloud Platform. There are four ways you can interact with Google Cloud Platform, and we'll talk about each in turn: the console, the SDK and Cloud Shell, the mobile app, and the APIs. Cloud Console is Google Cloud's graphical user interface, which helps you deploy, scale, and diagnose production issues in a simple web-based interface. With Cloud Console, you can easily find your resources, check their health, have full management control over them, and set budgets to control how much you spend on them. Search to quickly find resources and connect to instances via SSH in the browser. Master the most complex tasks with Cloud Shell, your admin machine in the cloud, where you can use the built-in SDK. The Google Cloud SDK is a set of tools that you can use to manage resources and applications hosted on Google Cloud Platform. These include the gcloud tool, which provides the main command-line interface for Google Cloud Platform products and services, as well as gsutil and bq. When installed, all of the tools within the SDK are located under the bin directory. Google Cloud Shell provides you with command-line access to your cloud resources directly from the browser. Cloud Shell is a Debian-based virtual machine with a persistent five-gigabyte home directory, which makes it easy for you to manage your GCP projects and resources. With Cloud Shell, the Cloud SDK gcloud command and other utilities you need are always installed, available, up to date, and fully authenticated when you need them. The services that make up GCP offer application programming interfaces, or APIs, which allow you to programmatically control your cloud environment through code. Cloud APIs provide functionality similar to the Cloud SDK and the Cloud Console and allow you to automate your workflow by using your favorite language. Use these Cloud APIs with REST calls or client libraries in popular programming languages.
The Cloud Console mobile app gives you a convenient way to discover, understand, and respond to production issues. Monitor and make changes to Google Cloud Platform resources from your iOS or Android device. Manage GCP resources such as projects, billing, App Engine apps, and Compute Engine virtual machines. Receive and respond to alerts, helping you quickly address production-affecting issues.
In this lab, you explore the Cloud Shell environment and the gcloud command-line interface, explore the interactive mode, and use Cloud Shell developer tools to test a simple web application. You're welcome to stay for a lab walkthrough, but remember that Google Cloud's user interface can change, so your environment might look slightly different. So here I'm in the Cloud Console, and in task one we're going to create a virtual machine with gcloud in Cloud Shell. So for that, the first thing I need to do is activate Cloud Shell. If I hover up here on this icon, you can see it says Activate Cloud Shell. So I'm going to click on that, and I can collapse the navigation menu because we're not going to need that. I'm prompted whether I want to continue, so I just click Continue, and I'm going to resize this editor slightly. If you want, you could also open it in a new window on your end. So we're going to wait for that to load up, and here we are. Note a couple of interesting things before we get started. You can see here that the first part before the at sign is my account. So this is the account that Qwiklabs has provided to me. After the at sign, I see that I'm using the Cloud Shell instance. And after that, in yellow, I see the project ID. So this is the project ID I'm currently using. If this does not match the project ID that you have in Qwiklabs, or if one isn't here, then you can use the gcloud config set project command to change that project ID. In my case, this matches, so I'm going to move on. So the first command we're given is gcloud compute instances create my-vm --machine-type. So I'm going to paste it in here and run that. So gcloud refers to the gcloud command line, compute refers to compute resources, specifically instances, create is the action, and my-vm is the name of the instance that I'm going to give it.
And then the machine-type flag specifies that this is an n1-standard-2 instance, which, if you look at the documentation, is an instance with two virtual CPUs and 7.5 gigabytes of memory. Now it's asking me if I want to create that in this zone. Sometimes it could also ask you for a list of zones and you have to pick one. So I'm just going to click yes, let's use that zone. So now I'm going to wait for that instance to be created, and then we're going to move on to task two to get information on commands. So here we go. It's telling me the name of the VM, the zone, the machine type, and its internal and external IP addresses, and the machine is running. Great. So let's say you're new to this command. How would you get some more information about it? Well, there is a help flag that you can put in after create, so I could type gcloud compute instances create --help. If I run this, I now go into the documentation, essentially, of this command, and you could also look this up externally. I can use Enter or the space bar to scroll through; Enter is line by line and the space bar is bigger sections, and once I'm done, I can use Q to get out of here. Now, the help flag provides verbose output for commands. If you want a short summary, you can use just the -h flag. So let me run that again with just a dash h, and I'm going to hit Enter, and now you get a shorter form. And again, this is also all in the documentation, so you can reference it there as well. So now moving on to task three, in which we're going to explore the interactive mode. So again, let's assume you're new to all these commands and you want a little bit of help when using them. For that, you can run gcloud beta interactive to start the interactive mode, and here it is, started. And now we're going to try to run the same command, but we're going to type it out slowly and see the suggestions that we're being given. So I'm going to start with gcloud.
And then if I hit space, I'm given a long list, and I can tab my way through here to look for the resource I'm trying to create. So, compute. If I hover over that, I see that this lets me create, configure, and manipulate Compute Engine virtual machine instances. Great, that's what I'm looking for. So let me hit space. Now, what specifically am I trying to do within compute? Create an instance. I already know that, but if I didn't, I could scroll through this list. There are a lot of different things in here, because this lets you actually create more than just compute resources: I could create disks, firewall rules, forwarding rules, health checks, images, and so on. But here I can see instances, so I can hit space again. And now comes the action. Well, what am I trying to do? Let's assume we want to get some more information about a VM. There's actually a describe command. I could just test and say, hey, is there a describe command? Yes, there is. Great, so let me use that. The next thing I need to provide is the instance name; I'm being told that down here. Now, I just created a VM, so I should just be able to say my-vm. And if I just type 'my', it's actually searching for all the VMs that are part of this project that I've created, and it's telling me, hey, there's your VM. So I'm going to hit tab again and then hit enter. And now it's going to refer to a specific zone: is that the VM I'm looking for? I click yes, and it's giving me an explanation, or I should say the description, of my virtual machine. So there's a lot more detail than what we got earlier when we created the machine. And once I'm done in here, I can press F9 to quit. So now I'm back at the command line. Now, in task four, we're going to use Cloud Shell for testing. The contents of your Cloud Shell home directory persist across Google Cloud projects between all Cloud Shell sessions, even after the Cloud Shell virtual machine terminates and is restarted.
So what we're going to do is actually use that machine now and test it. First, we're going to run a gsutil cp command and copy some sample code, specifically a sample app, from a Cloud Storage bucket that is publicly accessible. And here we can see it being copied, and now it's copied to our directory. So what I'm going to do now is run it; this is just some Node.js code. So I'm going to run that, and it's telling me now that the server is listening on port 8080. So what I can do now is click up here on Web Preview and preview on port 8080, and we'll get the sample code displayed, which is 'Hello from your Cloud Shell'. So this shows you that while you're using Cloud Shell, you can also easily test some of your code or run it here before running it on a virtual machine that you created. So Cloud Shell is really there for you to run commands to create things, but also for testing. That's the end of the lab.
In this video, I will present the way billing works on Google Cloud Platform, how to control spend with quotas, and how to leverage the power of labels. Although IAM policies are inherited top to bottom, billing is accumulated from the bottom up. As you can see on the right, resource consumption is measured in quantities like rate of use or time, number of items, or feature use. Because a resource belongs to only one project, a project accumulates the consumption of all its resources. Each project is associated with one billing account, which means that an organization node contains all billing accounts. Let's explore organizations, projects, resources, and more. Cloud resources have near unlimited capacity, and since you pay for what you consume, quotas protect you from unintentional expenditure. That's the reason all resources in GCP are subject to project quotas or limits. Their purpose is to encourage you to make capacity planning a priority by setting upper limits for the resources that can be consumed within your project. A good example is decreasing the quota on the number of vCPUs from the default of 24 to 6 for a proof-of-concept or test project, therefore controlling the monthly bill. As a result, if your project exceeds a particular quota while using a service, the platform will return an error. Given these quotas, you may be wondering, how do I spin up one of these 96-core VMs? As your use of GCP expands over time, your quotas may increase accordingly. If you expect a notable upcoming increase in usage, you can proactively request quota adjustments from the Quotas page in the GCP console. This page will also display your current quotas. Project quotas prevent runaway consumption in case of an error or a malicious attack. For example, imagine you accidentally created 100 instead of 10 Compute Engine instances using the gcloud command line. Having quotas in place can protect you from this scenario.
Quotas also prevent billing spikes or surprises, as quotas are related to billing. We will go through how to set up budgets and alerts later, which will really help you manage billing efficiently. Finally, quotas force sizing considerations and periodic review. For example, do you really need a 96-core instance, or can you go with a smaller and cheaper alternative? It is also important to mention that quotas are the maximum amount of resources you can create for that specific resource, as long as those resources are available. Quotas do not guarantee that resources will be available at all times. For example, if a region is out of local SSDs, you cannot create local SSDs in that region, even if you still have quota for local SSDs.
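To make the quota idea concrete, here is a minimal Python sketch, not a GCP API, with the function name and numbers invented for illustration. A guard like the default 24-vCPU quota rejects the accidental "100 instead of 10" request for n1-standard-2 instances (2 vCPUs each) before anything is billed.

```python
# Hypothetical sketch of the guard a vCPU quota provides; not how GCP
# implements quota checks, just the arithmetic behind them.

def within_quota(requested_instances, vcpus_per_instance, vcpu_quota):
    """Return True if the request fits inside the project's vCPU quota."""
    return requested_instances * vcpus_per_instance <= vcpu_quota

# The intended request: 10 instances against the default 24-vCPU quota.
print(within_quota(10, 2, 24))   # needs 20 vCPUs, fits

# The accidental request from the example: 100 instances.
print(within_quota(100, 2, 24))  # needs 200 vCPUs, rejected
```

Lowering the quota to 6 for a test project, as suggested above, simply tightens the same check.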
Because each GCP service has its own pricing model, we recommend using the GCP pricing calculator to estimate the cost of a collection of resources. The pricing calculator is a web-based tool that allows you to specify the expected consumption of certain services and resources, and you will receive an estimated cost of the utilization of those resources as the output. For example, you can specify an n1-standard-1 VM instance in us-central1, along with 100 gigabytes of egress traffic to the Americas, and the pricing calculator then returns the total estimated cost. You can adjust the currency and timeframe to meet your needs, and when you're done, you can email the estimate or save it as a URL for future reference. To help with project planning and controlling costs, you can set a budget. Setting a budget lets you track how your spend is growing toward that amount. This screenshot shows the budget creation interface. Set a budget name and specify which project this budget applies to. Set the budget at a specific amount or match it to the previous month's spend. Then determine your budget percentage alerts. These alerts send emails to billing admins after spend exceeds a percentage of the budget or a specified amount. In our case, it would send an email when spending reaches 50, 90, and 100 percent of the budget amount. You can even choose to send an alert when your spend is forecasted to exceed a percentage of the budget amount by the end of the budget period. Here is an example of an email notification. The email contains the project name, the percent of the budget that was exceeded, and the budget amount. It's worth mentioning that you can also respond to budget notifications programmatically using webhooks, so you can develop your own solution. Email isn't always the best way to stay up to date on your cloud costs, particularly if your budget is critical and time sensitive.
You can use programmatic notifications to forward your budget messages to other mediums and to automate cost management
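As a sketch of what a programmatic consumer of those budget messages might compute, here is a small Python function checking which of the 50, 90, and 100 percent thresholds from the example a given spend has crossed. This is illustrative only; it is not the actual GCP notification payload or API.

```python
# Illustrative helper, not part of any GCP SDK: given current spend and
# a budget, which alert thresholds (as fractions) have been reached?

def crossed_thresholds(spend, budget, thresholds=(0.5, 0.9, 1.0)):
    """Return the alert thresholds that the current spend has reached."""
    return [t for t in thresholds if spend >= t * budget]

print(crossed_thresholds(60, 100))   # only the 50% alert fires
print(crossed_thresholds(95, 100))   # the 50% and 90% alerts fire
```

A webhook handler could apply the same comparison to the amounts in an incoming notification and, for example, post to a chat channel or disable billing on a sandbox project.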
Labels are a utility for organizing GCP resources. Labels are key-value pairs that you can attach to your resources, like VMs, disks, snapshots, and images. You can create and manage labels using the GCP console, gcloud, or the Resource Manager API, and each resource can have up to 64 labels. For example, you can create a label to define the environment of your virtual machines. Then you define the label for each of your instances as either production or test. Using this label, you can search for and list all your production resources for inventory purposes. Labels can also be used in scripts to help analyze cost or run bulk operations on multiple resources. The screenshot on the right shows an example of four labels that are created on an instance. Let's go over some examples of what to use labels for. We recommend adding labels based on the team or cost center to distinguish instances owned by different teams. You can use this type of label for cost accounting or budgeting, for example, team:marketing and team:research. You can also use labels to distinguish components, for example, component:redis or component:frontend. Again, you can label based on environment or stage. You should also consider using labels to define an owner or a primary contact for a resource, for example, owner:lisa or contact:opm. Or add labels to your resources to define their state, for example, state:inuse or state:readyfordeletion. You can even visualize spend over time with Data Studio. Data Studio turns your data into informative dashboards and reports that are easy to read, easy to share, and fully customizable. For example, you can slice and dice your billing reports using your labels.
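To illustrate how labels enable that kind of searching and listing, here is a small Python sketch. The resource names, label values, and helper function are made up for illustration; they are not from any real project or SDK.

```python
# Toy inventory of instances with labels, mirroring the environment and
# team examples above. All names here are invented for illustration.
instances = [
    {"name": "web-1", "labels": {"environment": "production", "team": "marketing"}},
    {"name": "web-2", "labels": {"environment": "test", "team": "marketing"}},
    {"name": "db-1",  "labels": {"environment": "production", "component": "redis"}},
]

def with_label(resources, key, value):
    """List the names of resources carrying the given label key-value pair."""
    return [r["name"] for r in resources if r["labels"].get(key) == value]

# List all production resources for inventory purposes.
print(with_label(instances, "environment", "production"))
```

The same filtering idea is what a bulk-operation script would use to pick its targets, or what a billing report does when you group costs by label.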
In this lab, you learn how to view billing reports in the Google Cloud console using a sample billing account. In addition, you view your current and forecasted GCP costs at the project, product, and SKU level. Lastly, you analyze costs using report filters to identify cost drivers and trends. Examples of report filters include projects, products, SKUs, locations, and credits. You're welcome to stay for a lab walkthrough, but remember that Google Cloud's user interface can change, so your environment might look slightly different. Now, this lab is a little different in that we're looking at a billing report for a sample billing account. As you can see here, I'm already logged in using the link in the lab instructions. If you don't end up on this screen, you can easily navigate to it by clicking on the navigation menu, going to Billing and then Reports. I've also collapsed a couple of these windows here. You see that there's a navigation menu; since we don't need that for a large chunk of the lab, I'm just going to collapse it. We're also seeing the filters by default, and we'll use those in a second, but for now I want a little bit more real estate here. So in the first section, we're going to look into how much I am spending. In here on the top left we can see, for the current month and the current date as of this recording, the spend that we have so far. And we even have a little bit of a calculation here of how this is higher compared to the previous spend that we had in a similar time frame in April. So you can see that our spend is actually going up a little bit. What's also interesting is that on the right we actually get a forecasted total cost. If we hover over that, we see that this uses historical data, specifically from 4/13/2020 to 5/25/2020. And if we scroll down, we see a chart here; we can switch between having this be a line chart or a bar chart.
And what this does is it shows us the actual spend; these are the different colors, by project. If I hover over that, I see the actual spend for each of the projects. There seem to be four projects in here: dev, prod, storage, and sandbox. And we also see this dotted line, which is the forecasted spend. So it seems like in a previous time period I had a spend right around here in the middle of the month, which I don't have this time, but I have these other spikes that we're going to investigate a little bit more later. For now, we're just focusing on the fact that we can see the total cost by project. We can also scroll down and see each project here, and we can highlight it as well to see what it corresponds to. And then we have the dotted line which, as you can see here, is the cost trend, and it tells you what historical data it is using to come up with that cost trend. So what we're going to do next is filter things a little bit. On the top right here, I'm going to click on Show filters, and there are lots of different options that we're going to explore. For example, one thing I could do is, rather than just going by the current month, specify the last month or a specific time frame, and rather than by usage date, I can also go by invoice month. So maybe I want to look at all the invoices from the beginning of the year to now, and then we see all those spikes again; as I mentioned, we will explore those soon. If we scroll down, we have other filters down here. For example, we could exclude the tax on this project. We also have discounts that we could look into, and we even get time ranges, so let's actually look into those a little bit. On the time range, let's go to usage date; the lab asks us to explore the last 30 days.
So let's do that: we click on Custom range and just look at the last 30 days, and there we can see those spikes again that keep happening. Now, we can also filter by location. If we scroll here and expand this location area, you see that I can filter by geography or also by region and multi-region. So maybe I want to look at cost in, let's say, just the Americas, and it shows me instantly what those regions and multi-regions correspond to. Okay, so you can see there's a lot of cost there. Asia Pacific has some cost too, but really not that much; there are spikes in here, but if you look at the actual cost, it's just a dollar in total. And if I click on Europe, you can see there's also definitely some cost here. So you can easily toggle between all of these. I'm actually going to turn this all off so that we're looking at all the locations. Now, we can also look at our credits. If we scroll down, there are a couple of different credits here. One of those is the sustained use discount. There's a question mark here, and I can hover over it to explain this. It says that sustained use discounts are automatic discounts that you get for running specific Compute Engine resources for a significant portion of the billing month. You can read about this in the documentation, but essentially, the longer that you run an instance in a given month, the higher of a discount you get. So you just get a discount for actually using a virtual machine a lot throughout a specific billing month, and by using, I really mean having it running. Whether that machine has higher or lower CPU usage doesn't really change the spend; you just pay for the CPU as long as it's running, and the RAM, as well as the disk and other things associated with that virtual machine, maybe an external IP address or something else. So we can easily toggle all this off. Right now, I can see that my total cost is just about $200.
And if I toggle the sustained use discount off, I can see that without that discount, I would actually be paying about $50 more. So you can easily see the impact of having these discounts; they're automatic, so you don't have to do anything to get them. Now let's explore a little bit what our cost drivers are. As I mentioned, we have these peaks in here, and if I hover over them, I see that they correspond to a specific project, the blue project, the dev project; that's where these larger peaks are coming from. And we also have these smaller peaks here, and it looks like they correspond to our storage. So what we're going to do now is use the filters to first filter for the specific project, the dev project, where we get those spikes, and I can still see that here. And now, rather than grouping by project, because I only have one project, let's group by product. If I do that, I can see that BigQuery is actually a large cost driver here. If I hover right there on the big orange bar, you can see that on May 20, $33.82 were from BigQuery. Well, that's great, but maybe I'd like to get a better understanding of what specifically in BigQuery is causing these costs. So what I can do is go back to the filter and, rather than grouping by product, group by SKU, and there you can now see that analysis is what's causing this. And I have an ID I could use to look more into all the different things that correspond to this. But here you see an actual usage number that corresponds to these spikes, and that's why we have that higher cost. So now the only other thing we're going to look at is a summary of my costs. You could keep exploring this if you want to. Again, this is a demo billing account, so there's actual data that we have in here; the report for your own billing will obviously look very different.
But what I'm going to do is also show you the cost breakdown by navigating on the left-hand side, and this just gives me a little bit of a higher-level view of my total spend and my credits so far. And if I scroll down, I can see that, again, these are sustained use discounts. There are also spending-based discounts, and if we hover over here, it tells you the details: discounts that are applied after a contractual spending threshold has been reached. So you can actually have long-term commitment plans of a year or three years, and you would also get discounts on those. But again, the sustained use discounts are just automatic discounts applied for running specific Compute Engine resources for a significant portion of the billing month. So feel free to keep exploring this; there's a lot of data in here, but from my end, this is the end of the lab.
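For intuition on how those sustained use discounts accrue, here is a hedged Python sketch using the tiered rates as historically published for N1 machine types, where each successive quarter of the billing month is billed at 100%, 80%, 60%, and then 40% of the base rate. These numbers and the function are an illustration only; check the current pricing documentation before relying on them.

```python
# Assumed historical N1 sustained use tiers: each 25% slice of the billing
# month that the instance keeps running is billed at a deeper discount.
TIER_RATES = (1.0, 0.8, 0.6, 0.4)

def sustained_use_cost(base_monthly_cost, fraction_of_month_running):
    """Effective cost after tiered sustained use discounts (illustrative)."""
    cost = 0.0
    for i, rate in enumerate(TIER_RATES):
        tier_start = i * 0.25
        if fraction_of_month_running <= tier_start:
            break
        tier_usage = min(fraction_of_month_running - tier_start, 0.25)
        cost += base_monthly_cost * tier_usage * rate
    return cost

# Running for the whole month under these tiers costs 70% of the base
# rate, i.e. a 30% discount; shorter usage earns a smaller discount.
print(sustained_use_cost(100.0, 1.0))
```

Under these assumed tiers, a machine up for half the month would be billed at 45% of the full-month base cost, which matches the idea in the lab that the discount grows automatically the longer the instance runs.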
In this module, we compared the terminology from your source environment to Google Cloud's equivalent, for example, that virtual machines are called Compute Engine instances on Google Cloud. You also learned how resource hierarchy levels define trust boundaries in the Google Cloud environment. Finally, you were introduced to Cloud Identity and Access Management and how you can use it to control and secure your cloud environment. In the next module, we will show you how you can leverage Google's physical infrastructure by creating and configuring your own virtual private cloud network, in addition to explaining how to control access to your network with firewall rules and how to create subnets. We will identify the types of virtual machines that you can create in Compute Engine and discuss how to choose the right configuration for your needs based on configuration and cost. Before you can start a migration to Google Cloud, you need to create a secure connection between your on-premises environment and your VPC. We will introduce you to the range of interconnect options offered by Google Cloud. Move on to the next module to learn more.
Welcome to the Virtual Machines and Networks in the Cloud module. In this module, you will learn how to create a virtual private cloud, your network hosted on GCP's infrastructure. You will also learn how to control access to your network with firewall rules and how to create subnets. You will then learn how to create and manage virtual machines in Compute Engine, choose the right configuration, and understand Compute Engine's pricing model. Lastly, you will learn how to create a connection between your source environment and your virtual private cloud.
In this video, you will learn about Google Cloud Platform's physical geographic distribution. Regions are independent geographical areas that consist of zones. A region is usually referred to by a continent, a cardinal direction, and a number. For example, europe-west2 is a region in London. A zone is a single physical data center. Most regions have three or more zones to ensure redundancy. A regional resource, like an external IP address, is available to all the zones in the region and benefits from a higher degree of resilience. A zone should be considered a single failure domain within a region. Google designs zones to be independent of each other: each zone has power, cooling, networking, and control planes that are isolated from one another, which ensures a higher level of resiliency. Putting resources in different zones in a region provides isolation from most types of physical infrastructure and control plane failures. Putting resources in different regions provides an even higher degree of failure independence. This allows you to design robust systems with resources spread across different failure domains. Zones have high-bandwidth, low-latency network connections to each other, so you are free to create a highly resilient topology with minimal compromise on performance. An example of a zonal resource is a Compute Engine virtual machine. In this example, we have three virtual machines, two in region A and one in region B, that all serve end-user traffic. This design ensures a higher level of resiliency because the failure domains are geographically spread. Remember that zones are collections of data centers grouped together in a specific geographic area. Spreading your frontend virtual machines across different regions provides better resilience and also better coverage, because region B might be physically closer to your users and therefore might reduce latency.
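Since a zone name is just its region name plus a letter, the naming convention can be sketched in a couple of lines of Python. This is a toy helper for illustration, not part of any Google SDK.

```python
# Illustrative helper: a zone name like 'europe-west2-a' is the region
# name (continent, cardinal direction, number) plus a zone letter.

def region_of(zone):
    """Derive the region name from a zone name by dropping the zone letter."""
    return zone.rsplit("-", 1)[0]

print(region_of("europe-west2-a"))  # the London region from the example
print(region_of("us-east1-b"))
```

This pattern is handy in scripts that need to group zonal resources, such as VMs, by the region they live in.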
Google Cloud Platform services are globally distributed across North America, South America, Europe, Asia, and Australia. These locations are divided into regions and zones. You can choose where to locate your applications to meet your latency, availability, and durability requirements. Since you only pay for what you consume, you have access to a globally distributed infrastructure without paying for the upfront investment. Google Cloud Platform resources are designed to leverage their geographical distribution to create resilient and scalable solutions, such as our global network. Google has built a large, specialized data network to link all of its data centers together, so that content can be replicated or travel across multiple sites and services can be delivered closest to the end user. It is designed from the ground up to give customers high-speed throughput and reliably low latency for their applications. The network infrastructure is composed of edge points of presence, which are where Google's network connects to the rest of the internet. GCP can bring its traffic closer to its users because it operates an extensive global network of interconnection points. This reduces costs and provides users with a better experience. In this illustration, the blue lines represent the private submarine fiber optic cables that connect all the resources across the globe.
In this video, you will learn how to leverage Google's physical infrastructure by creating and configuring your own virtual private cloud network. GCP projects are global compartments that encompass services and resources under a single administrative unit. A project is also where you associate billing, control your expenditure with quotas, and enable APIs. Each project comes with a default virtual private cloud, or VPC, which is a global network. If you're looking for a way to segregate your resources under the same administrative unit, you can use more than one VPC in each project. The default VPC is a global network spanning all available regions across the world that we showed earlier, providing you with one cloud-based, interconnected network that can exist anywhere in the world: Asia, Europe, the Americas, all simultaneously. All resources inside the VPC can communicate over RFC 1918 private IP ranges out of the box and discover one another using the global internal DNS service. Inside the network, you can segregate your resources with regional subnetworks, which have IP ranges associated with them. Subnetworks span the zones that make up a region, which means that you can have resources in different zones on the same subnet, making management and fault tolerance a lot easier. In this example, the VPC has a subnet in the us-east1 region. The IP range spans all zones in the region, so both virtual machines you see on the screen are part of the same subnet, despite the fact that they run in different zones. Notice that the first and second addresses in the range, 10.0.0.0 and 10.0.0.1, are reserved for the network and the subnet's gateway, respectively. This makes the first and second available addresses .2 and .3, which are assigned to the virtual machine instances. The other reserved addresses in every subnet are the second-to-last address in the range and the last address, which is reserved as the broadcast address. To summarize, every subnet has four reserved IP addresses in its primary IP range.
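As a sanity check on this reservation scheme, here's a small shell sketch (the function name is mine, not part of any GCP tooling) that counts how many addresses in a subnet's primary range remain usable for VM instances:

```shell
# GCP reserves 4 addresses in every subnet's primary range:
# the network (.0), the gateway (.1), the second-to-last, and the last.
usable_addresses() {
  local prefix=$1                       # prefix length, e.g. 29 for a /29
  local total=$(( 2 ** (32 - prefix) ))
  echo $(( total - 4 ))
}

usable_addresses 29   # a /29 has 8 addresses, so 4 are usable
usable_addresses 20   # a /20 has 4096 addresses, so 4092 are usable
```

This is why the /29 subnet in the upcoming demo fills up after only four VMs.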
So far you have learned about the default network, which is one of the three VPC network types in Google Cloud Platform. Every project is provisioned with a default VPC network that comes with a preset of subnets and firewall rules. Specifically, a subnet is allocated for each region with a non-overlapping IP range, along with firewall rules that allow ingress ICMP, RDP, and SSH traffic from anywhere, as well as ingress traffic from within the default network for all protocols and ports. We recommend using the default VPC for prototyping and testing purposes rather than production workloads. In an auto mode network, one subnet for each region is automatically created. The default network is actually an auto mode network, and you can manually add to and modify an auto mode network with greater freedom. These automatically created subnets use a set of predefined IP ranges with a /20 mask that can be expanded up to a /16. All of the subnets fit within the 10.128.0.0/9 CIDR block. Therefore, as new GCP regions become available, new subnets in those regions are automatically added to an auto mode network using an IP range from that block. A custom mode network does not automatically create subnets. This type of network provides you with complete control over the subnets and the IP ranges. You decide which subnets to create, in regions you choose, using IP ranges you specify within the RFC 1918 address space. These IP ranges cannot overlap between subnets of the same network. This network type is recommended for production because it assumes no implicit trust and gives you maximum control over its layout. It is also the recommended network type if you want to interconnect your VPC network with other networks, because you have control over the IP address layout. You can convert an auto mode network to a custom mode network to take advantage of the control that custom mode networks provide.
However, this conversion is one-way, meaning that custom mode networks cannot be changed back to auto mode networks, so carefully review the considerations of auto mode networks to help you decide which type of network meets your needs.
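The same conversion can be done from the command line with gcloud; a minimal sketch, where the network name is just an example:

```shell
# One-way conversion: the auto mode network becomes custom mode,
# keeping its existing subnets but no longer auto-creating new ones
# as new regions come online.
gcloud compute networks update mynetwork --switch-to-custom-subnet-mode
```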
Let me show you how to expand a subnet within GCP. I've gone ahead and created a custom subnet here with a /29 mask. A /29 mask provides eight addresses, but four of those are reserved, leaving me with four for my VM instances. If I click in here, you'll see that I already have four VM instances. So let's try to create another VM instance in the subnet. Now, the subnet is in us-west1, so I'm going to create an instance and specify that region. Without changing anything else, I'm just going to click Create. As we can see, I did not get a green check mark next to my instance. Instead, I get an exclamation sign, and if I hover over that, I get the actual error: the IP space of this subnet has been exhausted. So, as expected, I can only create four VM instances with a /29 mask. But I want to show you how to expand that. Let's go to the subnets; I'm going to click directly on my subnet and click the edit icon up here to change the address range. Let's, for example, change that to a /24, which is going to allow over 200 instances in here, and I'm just going to click Save. I'm doing all of this without taking any of my VM instances down. We can see up here in the notification that this is currently being updated, and then we can see that it's now reflected and I have the /24. So let's go back to my instances page, and I'm actually just going to hit this Retry button over here to see if I can re-create that instance within that subnet now that I have expanded it. All right, we can see that the instance has been created successfully and is running. And that's how easy it is to expand a subnet in GCP without any workload shutdown or downtime.
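The same expansion works from the command line; a sketch, with the subnet name as a placeholder:

```shell
# Expand an existing subnet's primary range in place (no VM downtime).
# The prefix length can only decrease (e.g. /29 -> /24), i.e. the range
# can grow but never shrink.
gcloud compute networks subnets expand-ip-range my-subnet \
    --region=us-west1 \
    --prefix-length=24
```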
VPCs give you a globally distributed firewall you can use to control access to instances, for both incoming and outgoing traffic. You can define firewall rules in terms of network tags on virtual machine instances, which makes administration really convenient. For example, you can tag all your web servers with, say, "web" and write a firewall rule saying that traffic on ports 80 and 443 is allowed into all VMs with the "web" tag, no matter what their IP address is or where they're located. GCP firewall rules are stateful. That means that firewall rules allow bidirectional communication once a session is established. Lastly, it's important to mention that all networks come with implied rules: even in the absence of firewall rules in the network, there is still an implied deny-all ingress rule and an implied allow-all egress rule on the network. Here are the parameters firewall rules have: the direction of the rule (inbound connections are matched against ingress rules only, and outbound connections are matched against egress rules only); the protocol and port of the connection, which can be a single port like 80 or multiple ones like 80 and 443; and the priority of the rule, which governs the order in which rules are evaluated: the lower the number, the more priority the rule has over others.
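The tag-based web-server rule just described would look roughly like this with gcloud; the rule and network names are illustrative:

```shell
# Allow HTTP/HTTPS ingress to every VM carrying the "web" network tag,
# regardless of the VM's IP address or zone.
gcloud compute firewall-rules create allow-web \
    --network=mynetwork \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=web
```

Because rules are stateful, responses from the tagged web servers flow back without a separate egress rule.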
Let's apply some of the network features we just discussed in the lab. In this lab, you create an auto mode VPC network with firewall rules and two VM instances. Then you convert the auto mode network to a custom mode network and create other custom mode networks, as shown in this network diagram. You also explore the connectivity across networks.
In this lab, you explored the default network and determined that you cannot create VM instances without a VPC network. So you created a new auto mode VPC network with subnets, routes, firewall rules, and two VM instances, and tested connectivity for those VM instances. Because auto mode networks aren't recommended for production, you converted the auto mode network to a custom mode network. Next, you created two more custom mode VPC networks with firewall rules and VM instances, using the GCP console and the gcloud command line. Then you tested the connectivity across VPC networks, which worked when you pinged external IP addresses but not when you pinged internal IP addresses. VPC networks are, by default, isolated private networking domains; therefore, no internal IP address communication is allowed between networks unless you set up mechanisms such as VPC peering or a VPN connection. You can stay for a lab walkthrough, but remember that GCP's user interface can change, so your environment might look slightly different. All right, so here I am in the GCP console, and the first thing I'm going to do is explore the default network. If I click on the navigation menu on the left-hand side and scroll down to VPC network, we will see that this project has a default network. Every project has a default network, unless you have an organizational policy that prevents this default network from being created. But essentially, all the different projects used through Qwiklabs will always have this. In here, we can see we have a different subnet in each of the different regions; all of these are private IP addresses. I can also go to the routes; these are established automatically with the network, so we can see routes between the subnets as well as the default route to the internet. And we can even look at the firewall rules.
The default network comes with some preset firewall rules to allow ICMP traffic from anywhere, RDP traffic, as well as SSH, and then also all protocols and ports from within the network. So this is the range of the network, and we also allow all traffic from within the network itself. Let's go ahead and actually delete these firewall rules. I can just check them all right here and delete them. Let's just assume that we want to get rid of everything that's been created for us and create our own network instead. So I'm going to go ahead and delete these. I can look at the status up here: we can see each rule being deleted. And once that is done, which is now, I can head to the networks, select the default network, and we're also just going to delete that entire network. Once we delete this network, we should see that there are no routes left, because without a network there's no use case for them. So let's just wait for the network to be deleted, and then we'll verify that. We can again see the progress bar up here showing that it's deleting; you can also hit refresh on this. It should just take a couple of seconds. You can see that as I'm refreshing, some of the subnets are disappearing: it's actually deleting all of the subnets first and then getting rid of the network as a whole, because the network is really nothing else than a combination of subnets, so all these subnets have to be deleted. There we go, they're all gone now, and now it's just the network itself that remains. If I go to routes, we should see that all the routes are already gone, because without the subnets there's really no need for the routes. And if I go back to the networks, we should see that any moment now the network itself also disappears. There we go. All right, so without a VPC network, we shouldn't be able to create any VM instances, containers, or App Engine applications.
Let's actually verify that. I'm going to go to the navigation menu, go to Compute Engine, and let's just try to create an instance. I'm going to click Create and leave everything at its default. If I go under Networking, we should see that it's going to complain: when I click on Networking, it actually doesn't have a network available. But let's just click Create and see what happens. And it does indeed give us an error and point out the fact that this tab has an issue. So we clearly cannot create an instance, because, again, these instances live in networks, and without a network we can't create one. Let's hit Cancel, and what we're going to do now is create our own auto mode network. I'm going to head back to VPC networks; and you can pin these services, by the way. I'm just going to pin VPC network and Compute Engine, because we're going to be going back and forth between them. Then, within VPC networks, we're going to create our own network. I can give it a name; I'm going to use the same name that I have in the lab instructions, which is mynetwork. Now, I have the option of creating a custom mode or an auto mode network; let's start off by creating an auto mode network. That's going to preset all of the different subnets for us in all the different regions that are available. You can scroll through those and see them all in here: they have a preset CIDR range, and you can expand that CIDR range later. But again, with an auto mode network, you don't define the actual IP address ranges. There are also firewall rules available. What's interesting here is you see that there's a deny-all ingress and an allow-all egress firewall rule. These are here by default and are actually implied; you can't even uncheck them. They come with all networks that you create, and you can see that they have the highest priority integer, which really means the lowest priority.
So by default, all ingress traffic is denied and all egress traffic is allowed, unless we create other firewall rules that say differently. If I check all these boxes, we are now allowing ingress traffic for these IP ranges and these protocols and ports. So let's go ahead and click Create, and we'll wait for that network to be created. Then we're going to look at the IP ranges for two of the different regions, create instances in those regions, and verify that they're taking those IP addresses. You can see the subnets are already all populated here. I can monitor the progress up here, too, but this is really done any second now, so I'm actually going to start heading over to Compute Engine to create our instances. Let's click Create. I'm going to give it a name, mynet-us-vm. This is going to be in us-central1, specifically zone c. I don't really need a big machine, since we're just doing some testing here, so let me just choose a micro, which reduces the cost a little bit, and I'm going to now click Create. Then we're going to repeat the same workflow (I can close this panel over here) and create an instance in Europe. I'm going to grab the name for that from the lab instructions, select the europe-west1 region, specifically zone c, again with a micro machine, which is just a shared core, and click Create for that as well. We can see the us-central1-c machine is already up. We also see the internal IP address that has been provided. Again, there are some reserved IP addresses: the .0 is reserved, as well as the .1, so in both of these ranges the .2 is the first available address. Now, we can verify that these are part of the right subnet. If I click on nic0, I go to the network interface details, and here we can see it's part of this subnetwork.
Now, the subnetwork in this case has the same name as the network, because this is an auto mode network, and here we can see that it's part of this range, 10.128.0.0/20. Let's verify that that is correct: we are in there with a .2. And the other one should now be in 10.132.0.0/20, so again, click on nic0, go to the subnetwork, and we can see that's true. You can also see here that the .0 and .1 addresses are reserved for the network and the gateway, so the .2 really was the first usable address within that range. Now, these are on the same network, so let's verify some connectivity between them. I'm going to grab the internal IP address of mynet-eu-vm, just copy that, and we're going to SSH to the other instance. Again, these instances are in two separate regions but in the same network, so we should be able to ping these addresses. In fact, I'll ping three times using the internal address, and you can see that this works. It works because we have that allow-internal firewall rule that we selected earlier. I can actually repeat the same thing using the name of the instance, and you can see that it's taking that name; you actually see the fully qualified domain name here, and it's just resolving the IP address for it. VPC networks have an internal DNS service that allows you to address instances by their DNS names instead of their internal IP addresses. That's very useful because, well, this internal IP address could change, right? But the name is not going to change, so it's always good to be aware that you can use these fully qualified domain names to ping instances. All right, now we can try this whole thing the other way around. Let me exit this instance, grab the internal IP address of the instance in the US, and SSH to the instance in Europe. We're also going to ping the internal IP address here, and we can see that works. We can even now try to ping the external IP address.
So that's 34.67.18.18 in my case, and that works as well. The reason that I'm able to ping the external address is that I have a firewall rule that allows ICMP externally, and I can verify those again: if I click on the network interface details here, I can see all of the firewall rules, what filters they have, and what protocols and ports. All right, so this all works fine, and let's assume that this workflow has worked for us. But now we have decided that we want to convert the auto mode network that we have to a custom mode network, so let's go ahead and do that. We're going to go to VPC networks, click on mynetwork, then click on Edit, change the subnet creation mode from auto to custom, and hit Save. Okay, so now we can navigate back. You can see that this is in progress up here; the mode still says auto (we could have also flipped that here). Let's wait for that to be refreshed, and now we can see that this network is now in custom mode. Okay, so let's say that this has worked so far, and now we realize that we need a couple more networks. There's a network diagram in the lab that has two other networks, as well as some instances and everything, so let's go ahead and create those. Now we're going to go to Create VPC network, and we're going to create the managementnet network. Rather than starting with auto mode and converting, we're just going to start with a custom mode network. For that, we have to define each of the subnets. The minimum information we need to provide is a name and the region, so let's select us-central1, and then the IP address range, and then I can click Done. Now I could add another subnet if I wanted to. But the other thing that's very interesting about this is that I'm creating this right now through the GCP console, but you can also create networks as well as subnets from the command line using gcloud.
And if I click down here on "command line", I'm actually provided with the commands to do that. The first one just creates the network itself. You don't have to use the project flag in here, so we could just say gcloud compute networks create, the name of the network, and the fact that the subnet mode is custom. Similarly, we then create the subnets with gcloud compute networks subnets create, the name of the subnet, the name of the network, the region, and the range. Okay, so again, that's the minimal information. Let's just hit Close and Create, and we'll create the other network from the command line. So it's creating that network, and in parallel I can now activate Cloud Shell by clicking up here in the right corner. Yes, I want to start using Cloud Shell. I'm just going to make it a little bit bigger, and once this is up, we're going to use the commands that we just saw to first create a network. This is going to be privatenet, which is also of mode custom, and once we have that, we're going to create two subnets within that network. You can see, by the way, in the console that the other network was created; privatenet is being created right now, and once that is ready, we can add the two subnets to it. There we go, there's the network. It's also telling us: hey, this new network doesn't have any firewall rules, and here are some commands if you want to create some. We'll do that in a second; let's just create the subnets here first. We're going to create one in the US, and then we're also going to create one in Europe. If you wanted to speed this up, you could actually launch another Cloud Shell session now that the network is up and create these subnets in parallel. But we're just going to wait for this to complete and then paste that command in there, and you can monitor all of this in the console if we click Refresh.
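The commands described here follow this shape; a sketch where the subnet names and IP ranges are illustrative stand-ins for the values in the lab instructions:

```shell
# Create a custom mode network, then add regional subnets with
# explicit, non-overlapping RFC 1918 ranges.
gcloud compute networks create privatenet --subnet-mode=custom

gcloud compute networks subnets create privatesubnet-us \
    --network=privatenet --region=us-central1 --range=172.16.0.0/24

gcloud compute networks subnets create privatesubnet-eu \
    --network=privatenet --region=europe-west1 --range=172.20.0.0/20
```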
It's also completed; it just returns, as in "I've done exactly what you told me." Let's create the other one. Oops, I didn't copy the command correctly. There we go, this one is now in Europe, specifically europe-west1. Oops, wrong button there. Refresh, and we can see that it's already being created there. So we can definitely display all of those in the GCP console. Also, if you click this button over here in Cloud Shell, you can actually open it in a new window; it opens in a new tab, which preserves your screen real estate, so you can keep focusing on the console as well as on Cloud Shell. Let me actually create some screen real estate by just clearing this, and then run a command to list all the networks: gcloud compute networks list. We can see the three networks; they're all custom mode. We can dig deeper into this by also listing the subnetworks and using a sort-by flag to sort them by network. Now we'll see that mynetwork has a lot of subnets, because it used to be in auto mode, while managementnet has one subnet and privatenet has two subnets. All right, so now we're going to create some firewall rules. Let's click on firewall rules up here; you can see the ones that are already there. Create firewall rule: we'll repeat the same process we did earlier. We'll first create this one using the console, and then we'll repeat the firewall rules for the other network using Cloud Shell. Let me give it a name, and let's make sure I select the right network that the firewall rule applies to. Let's just do all instances, and for the IP ranges select all addresses, and I'm allowing, in this case, ICMP, SSH, and RDP: so let me define icmp, then tcp:22 for SSH and tcp:3389 for RDP. And now, down here, I can click on "command line". You can see this is one long command. Again, you don't need to define the project flag: it's gcloud compute firewall-rules create, the name of the rule,
the fact that it's ingress, the priority (that is actually the default, so we could leave it out), and, importantly, the name of the network, the allow action with the protocols and ports, as well as the source ranges. So let's create that in the console, and we'll grab the command from the lab instructions to do the same for the other network. Here you can see I paste that in, and that should now create the other firewall rule for us. We can monitor the firewall rules here in the console as well as in Cloud Shell, so we'll run a command to list all the firewall rules in a second. So they're all created: if we list them, we can see them all here, and if we refresh this, we can also see them right here. All right, so now it's time to create some more instances and then explore the connectivity. Let's head back to Compute Engine. I'm going to create instances in these new networks that we created. Let me click Create instance, and I'm actually going to close Cloud Shell for now, or just make it smaller. I'm going to provide a name and us-central1-c; a small machine is fine. And now, importantly, I need to expand this option down here to select the right network. We have three options right now, and it has actually pre-selected the correct network, because it's listed up top. So let's click Done. And there's again a command-line way; there's a lot of information here that we don't need, as you'll see the second we run our command, like the boot disk, where we're selecting a lot of standard options. So let's just hit Create, and let's pull the command from the lab that creates the same thing in a different network. That's gcloud compute instances create, the name of the instance, the zone, the machine type, and the subnet; that is the bare minimum that we need to provide. So let's run that. You can see the other instance is already created; I can refresh this and see that
the other instance is already coming up, too. And then, once Cloud Shell is updated, we can list all the instances. Let's do that here. We can sort them by zone, or we could sort them by network, so we can see that in one zone here we have one instance, and in another zone we have three instances. Now, keep in mind these instances are in different networks, and we can display that if we go to Columns and check Network. You can see which network each instance is in: the instances are spread across the three VPC networks, even though several of them are in the same zone. And with that, let's now get to the connectivity we're going to explore. We're going to try to ping IP addresses, both external and internal, and see what works. Let me grab the managementnet-us-vm external IP address, and we're going to SSH to mynet-us-vm. Now, they are in the same zone, but they're in different networks, so let's see if we can ping the external IP address, and then we'll try the internal. So, external works; that's because we set up the firewall rules for that. I can also do the same for the privatenet instance: let me copy its external IP address, and pinging that across networks works as well. Okay, so you can ping those, even though they're in different networks. Now, from an internal perspective, I should only be able to ping mynet-eu-vm, which we actually tried earlier already. So let me just try the other ones: when I try 10.130.0.2, we can see that that's not leading to anything; we should be getting 100% packet loss. And then we'll try the same for the other one, 172.16.0.2, and we can see that that isn't working either. So even though this instance is in the same zone as the other instances I'm trying to ping, the fact that they are in a different network does not allow me to ping on the internal IP, unless we set up other mechanisms such as VPC peering or a VPN. And that's the end of the lab.
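The listing commands used in this walkthrough look roughly like this; a sketch using gcloud's standard sort-by flag:

```shell
# List all instances, grouped by zone.
gcloud compute instances list --sort-by=ZONE

# List all subnets, grouped by the VPC network they belong to,
# to see which network each range lives in.
gcloud compute networks subnets list --sort-by=NETWORK
```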
In this module, you will learn about Google Cloud's managed virtual machine offering, Compute Engine. Google's Compute Engine provides infrastructure as a service. That means that Google Cloud Platform manages the physical servers and services for you, providing you with familiar virtual machines without the need to maintain the infrastructure that runs them. Compute Engine provides you with a lot of configuration options, and you only pay for what you provision, with no upfront cost. Essentially, you choose how much memory and how many vCPUs you want. You choose the type of disk you want: a standard disk, SSD, local SSD, or a mixture of them. You can even configure networking interfaces and choose the operating system to run on your virtual machine. In GCP, each virtual machine can have two IP addresses. One of them is an internal IP address, which is assigned via internal DHCP. When you create a virtual machine in GCP, its symbolic name is registered with an internal DNS service that translates the name to the internal IP address across the global network. DNS is scoped to the network, so it can translate web URLs and virtual machine names to their hosts in the same network, but it cannot translate host names from virtual machines in a different network. The other IP address is the external IP address, which is optional. You can assign an external IP address if your machine is externally facing. That external IP address can be assigned from a pool, making it ephemeral, or you can assign a reserved external IP address, making it static.
I just mentioned that VMs can have internal and external IP addresses. Let's explore this in the GCP console. I'm going to go ahead and try to create a VM, and I'm just going to leave everything at the default. What I want to focus on is the networking interface, so I'm going to expand this menu right here and click on Networking. Here we see that this is currently being created in the default network. We see the subnetwork, and that matches the region that I've chosen, which I just left at the default. Specifically, we see that this has a /20 mask here, and that really allows for a lot of addresses: 4,096 addresses to be exact, and four of those you can't use, so you'd be able to create over 4,000 VM instances in this subnet. Now, if I focus on the primary internal IP, you can see it's by default ephemeral. I could also custom-input an internal IP address, or I could actually reserve one, meaning that if I stopped and restarted this VM, it would still have the same IP. The same goes for the external IP; moreover, I actually don't even need an external IP. As I just mentioned, only internal IPs are required for VM instances, so I could say that I don't want an external IP address altogether, but let's leave it for now. Let me just create this instance to show the internal and external IP addresses that get generated. So here we are: the instance has been created. You can see I have the internal IP address, which is within that subnet range we just looked at, and I also have the external IP address. Now, I want to prove to you that this external IP address is ephemeral, and the way I'm going to do that is by clicking on this instance and stopping it. All right, here we can see that the instance has now been stopped. The internal IP address is actually still here, but the external IP is gone.
And if I now go ahead and restart it, it's going to ask me if I'm sure I want to do that, because I get billed for it. Yes, I do want to restart this instance. Now we're going to wait for that instance to start back up. And here we go, the instance is starting, but we can already see that the external IP address is now different. So this demonstrates that every VM needs an internal IP address, but external IP addresses are optional and, by default, ephemeral.
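If you want the external IP to survive a stop/start cycle, you can reserve a static address instead of using an ephemeral one; a sketch with gcloud, where the address, instance names, and zone are illustrative:

```shell
# Reserve a static external address, then attach it at instance creation.
gcloud compute addresses create my-static-ip --region=us-central1

gcloud compute instances create my-vm \
    --zone=us-central1-c \
    --address=my-static-ip

# Conversely, --no-address creates a VM with no external IP at all,
# reachable only via its internal IP.
gcloud compute instances create internal-only-vm \
    --zone=us-central1-c \
    --no-address
```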
Standard machine types are suitable for tasks that have a balance between CPU and memory needs. Standard machine types have 3.75 gigabytes of memory per vCPU. For instance, a 2-vCPU machine will have 7.5 gigabytes of RAM, and a 4-vCPU machine will have 15 gigabytes of RAM. Each of these machines supports a maximum of more than 100 persistent disks, with a total persistent disk size of 64 terabytes, which is also the case for the rest of the machine types. High-memory machine types are ideal for tasks that require more memory relative to vCPUs. For example, if your application keeps a lot of data in memory for faster access, we recommend using this machine family. High-memory machine types have 6.5 gigabytes of system memory per vCPU. Memory-optimized machine types are ideal for tasks that require intensive use of memory, with a higher memory-to-vCPU ratio than high-memory machine types. These machine types are perfectly suited for in-memory databases and in-memory analytics, such as SAP HANA and business warehousing workloads, genomic analysis, and SQL analysis services. Memory-optimized machine types have more than 14 gigabytes of memory per vCPU. For virtual machines that rely more on computational power, choose the high-CPU machine family, which comes with more vCPUs relative to memory. High-CPU machine types have 0.9 gigabytes of memory per vCPU. It is important to note that these machines do not have a higher clock rate, but simply have more vCPUs relative to memory. If you're looking for more performance per vCPU, the next family, compute-optimized, will be the better choice. Compute-optimized machine types offer the highest performance per core on Compute Engine. Built on the latest-generation Intel Scalable Processor (Cascade Lake), C2 machine types offer up to 3.8 gigahertz sustained all-core turbo and provide full transparency into the architecture of the underlying server platform,
enabling advanced performance tuning. C2 machine types offer much more computational power, run on newer platforms, and are generally more robust for compute-intensive workloads than the high-CPU machine types. Shared-core machine types provide one vCPU that is allowed to run for a portion of the time on a single hardware hyper-thread on the host CPU running your instance. Shared-core instances can be more cost-effective for running small, non-resource-intensive applications than other machine types. There are only two shared-core machine types to choose from: f1-micro and g1-small. These machine types offer bursting capabilities that allow instances to use additional physical CPU for short periods of time. Bursting happens automatically when your instance requires more physical CPU than originally allocated. During these spikes, your instance will opportunistically take advantage of available physical CPU in bursts. Note that bursts are not permanent and are only possible periodically. If none of the predefined machine types matches your needs, Google Cloud Platform provides a unique feature called custom machine types, where you can independently specify the number of vCPUs and the amount of memory for your instance. Custom machine types are ideal for workloads that require more processing power or more memory but don't need all the upgrades that are provided by the next largest predefined machine type.
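Creating a custom machine type from the command line looks roughly like this; a sketch where the instance name, zone, and sizing are example values (custom memory must stay within the per-vCPU limits of the machine family):

```shell
# A custom machine type: 6 vCPUs with 12 GB of RAM, sized between
# the predefined n1-standard-4 (15 GB) and n1-standard-8 (30 GB).
gcloud compute instances create my-custom-vm \
    --zone=us-central1-c \
    --custom-cpu=6 \
    --custom-memory=12GB
```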
Google Cloud Platform charges the use of your virtual machine per second after the first minute, which means you only pay for the time your machine was running. Sustained use discounts are automatic discounts that you get for running specific Compute Engine resources, such as vCPU usage, memory, or GPU devices, for a significant portion of the month. For example, when you run one of these resources for more than 25% of a month, Compute Engine automatically gives you a discount for every incremental minute you use for that instance. That discount increases with usage, and you can get up to a 30% net discount for instances that run the entire month. Compute Engine also offers the ability to purchase committed use contracts in return for heavily discounted prices for virtual machine usage. This option resembles the on-premises cost model, where most of the compute investment is paid up front. These discounts are known as committed use discounts. If your workload is stable and predictable, you can purchase a specific amount of vCPUs and memory for up to a 57% discount off normal prices, in return for committing to a usage term of one year or three years. A preemptible virtual machine is an instance that you can create and run at a much lower price than normal instances. A preemptible virtual machine uses excess Compute Engine resources, and therefore its availability varies. Uninterrupted, the virtual machine will operate for up to 24 hours and will then turn itself off. However, at a point of increased demand for the excess resources, the preemptible virtual machine will be terminated within up to 30 seconds. Preemptible machines are great for non-operational workloads like batch processes or best-effort services. If you have workloads that require physical isolation from other workloads or virtual machines in order to meet compliance or licensing requirements, consider sole-tenant nodes.
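The incremental sustained use discount can be sketched numerically. The tier rates below (full price for the first quarter of the month, then 80%, 60%, and 40% of the base rate for each subsequent quarter) are the ones GCP documented for most N1 machine types around the time of this course; treat them as assumptions for illustration:

```python
# Sketch of how the incremental sustained use discount accumulates.
# Assumed tiers: (fraction of the month, fraction of base price billed).
TIERS = [(0.25, 1.00), (0.25, 0.80), (0.25, 0.60), (0.25, 0.40)]

def effective_cost_fraction(usage_fraction: float) -> float:
    """Fraction of a full month's on-demand price paid for running an
    instance for `usage_fraction` (0.0 to 1.0) of the month."""
    billed = 0.0
    remaining = usage_fraction
    for tier_width, rate in TIERS:
        portion = min(remaining, tier_width)
        billed += portion * rate
        remaining -= portion
        if remaining <= 0:
            break
    return billed

full_month = effective_cost_fraction(1.0)  # 0.70 of base price
print(f"net discount for a full month: {1 - full_month:.0%}")  # 30%
```

Running the whole month bills 0.25 + 0.20 + 0.15 + 0.10 = 0.70 of the base price, which is where the "up to 30% net discount" figure in the narration comes from.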
A sole-tenant node is a physical Compute Engine server that is dedicated to hosting virtual machine instances only for your specific project. You can use a sole-tenant node to keep your instances physically separated from instances in other projects, or to group your instances together on the same host hardware, for example if you have a payment-processing workload that needs to be isolated to meet compliance requirements. The diagram on the left shows normal hosts with multiple virtual machine instances from multiple customers. A sole-tenant node, as shown on the right, also has multiple virtual machine instances, but they all belong to the same project. You can also fill the node with multiple smaller virtual machine instances of various sizes, including custom machine types.
In this module, you will learn about persistent disks and network interface controllers. Persistent disks resemble iSCSI disks on-premises: persistent disks are network-attached block storage. As the name implies, your data is persistent, meaning that your data is durable and outlives the machine's life cycle. Persistent disks are a zonal resource, and you can even have dual-zone disks for redundancy. The first persistent disk that a virtual machine attaches to is the boot disk, which is where the operating system lives. You can have more than one persistent disk, and there is a direct correlation between disk size and performance. That means that if you need more speed from your persistent disk, you can scale its size to match precisely the speed you need. Persistent disks come in two flavors: HDD and SSD. The choice comes down to cost and performance. HDD is great for long-tail files or general bulk data that does not need fast performance, and it is the more economical option per gigabyte. SSD is designed for random reads and writes and provides better performance for databases. Another feature of persistent disks is that you can dynamically resize them, even while they're running and attached to the VM, and therefore also benefit from the increase in performance. As we saw in the last slide, you can also attach a disk in read-only mode to multiple virtual machines. This allows you to share static data between multiple instances, which is cheaper than replicating your data to unique disks for individual instances. By default, Compute Engine encrypts all data at rest; GCP handles and manages disk encryption for you without any additional actions on your part. However, if you want to control and manage this encryption yourself, you can either use Cloud Key Management Service to create and manage key encryption keys, or create and manage your own encryption keys as customer-supplied encryption keys.
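The "scale its size to match the speed you need" point means a disk's performance ceiling grows linearly with its size. The per-gigabyte rates below (roughly 0.75 read IOPS per GB for standard HDD disks and 30 for SSD persistent disks) are the published figures around the time of this course; treat them as illustrative assumptions and check current documentation:

```python
# Illustrative: persistent disk read IOPS scale linearly with disk size.
# Assumed per-GB rates (approximate documented values for zonal disks
# at the time of this course -- verify against current GCP docs):
READ_IOPS_PER_GB = {"pd-standard": 0.75, "pd-ssd": 30.0}

def read_iops_ceiling(disk_type: str, size_gb: int) -> float:
    """Approximate read IOPS ceiling for a disk of the given type and size."""
    return READ_IOPS_PER_GB[disk_type] * size_gb

# Resizing a 100 GB SSD persistent disk to 500 GB quintuples its ceiling:
print(read_iops_ceiling("pd-ssd", 100))  # 3000.0
print(read_iops_ceiling("pd-ssd", 500))  # 15000.0
```

In practice the linear scaling also tops out at per-VM limits that depend on the machine type, which this sketch ignores.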
Compute Engine also provides physically attached SSDs called local SSDs. Because they're locally attached, these disks are considered ephemeral but provide very high IOPS. Data on these disks will survive a reset, but not a virtual machine stop or terminate, because these disks cannot be reattached to a different virtual machine. Currently, you can attach up to eight local SSD disks of 375 gigabytes each, resulting in a total of three terabytes. Persistent disks offer data redundancy because the data on each persistent disk is distributed across several physical disks. We recommend using a persistent HDD disk when you need an economical storage solution and performance requirements are relatively low. If you have high performance requirements, or your workloads rely more heavily on random reads and writes, like databases, we recommend the SSD options. For non-persistent storage, local SSDs provide the highest throughput and lowest latency because they're physically attached to your virtual machine. There are many differences between a physical hard disk in your on-premises environment and a Compute Engine persistent disk, which is essentially a virtual network-attached device. First of all, if you remember, with normal computer hardware disks you have to partition them; essentially, you're carving up a section for the operating system to get its own capacity. If you want to grow it, you have to repartition, and if you want to make changes, you might even have to reformat. If you want redundancy, you might create a redundant disk array, and if you want encryption, you need to encrypt the files before writing them to the disk. With cloud persistent disks, things are very different, because all that management is handled for you on the back end: you can simply grow disks and resize the file system. Because disks are virtual network devices, redundancy and snapshot services are built in, and disks are automatically encrypted.
Each Compute Engine virtual machine comes with a virtual network interface controller, or vNIC. The overall network throughput of your virtual machine scales at 2 gigabits per second per vCPU, up to 32 gigabits per second with 16 vCPU cores. Because persistent disks are accessed over the network instead of being physically attached to the virtual machine, they also use the allocated network bandwidth a machine has. You can have up to eight network interface controllers, each attached to a different VPC network; for example, if you want a network appliance that has one network interface controller in a DMZ VPC and one in your internal VPC. One important aspect to remember is that once a virtual machine is created, you cannot make any modifications to the network interfaces. That means that if you want to change the number of NICs, connect them to different networks, or add a NIC, you'll have to recreate the virtual machine.
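The bandwidth scaling rule above is simple enough to write down directly. This sketch only encodes the figures quoted in the narration (2 Gbps per vCPU, capped at 32 Gbps):

```python
def egress_cap_gbps(vcpus: int) -> int:
    """Per-VM network throughput cap: 2 Gbps per vCPU, up to 32 Gbps
    (the figures quoted in this course)."""
    return min(2 * vcpus, 32)

print(egress_cap_gbps(4))   # 8
print(egress_cap_gbps(16))  # 32
print(egress_cap_gbps(64))  # 32  (cap is reached at 16 vCPUs)
```

Keep in mind that persistent disk traffic shares this same budget, so a disk-heavy workload on a small machine can be bandwidth-bound rather than CPU-bound.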
Let's take some of the Compute Engine concepts we just discussed and apply them in the lab. In this lab, you explore virtual machine instantiation by creating several standard VMs and a custom VM. You also connect to those VMs using both SSH for Linux machines and RDP for Windows machines.
In this lab, you created several virtual machine instances of different types with different characteristics. Specifically, you created a small utility VM for administration purposes, a Windows VM, and a custom Linux VM. You also accessed both the Windows and Linux VMs and deleted all your created VMs. In general, start with a smaller VM when you're prototyping solutions to keep the cost down; when you're ready for production, trade up to larger VMs based on capacity. If you're building in redundancy for availability, remember to allocate excess capacity to meet performance requirements. Finally, consider using custom VMs when your application's requirements fall between the features of the standard types. You can stay for a lab walkthrough, but remember that GCP's user interface can change, so your environment might look slightly different. So in the GCP console, I'm going to navigate to Compute Engine and then VM instances, and in here we're going to click Create. Now we can define a name; there's this small question mark here, and if you hover over it, it can tell you a little bit more about some of the restrictions you have when choosing a name. I'm just going to call this my-utility-vm, and we're going to go with some of the options that I actually went over a little bit in the demo. We obviously can choose regions and zones, so I'll change the zone to what the lab is instructing, which is zone 1-c. Then, for the machine type, we have a lot of different options to choose from. We can see that the cost changes if I scale up to a machine with four virtual CPUs versus a machine that's maybe just a micro, which is a shared-core machine, so the cost can change quite drastically. So let's just leave all the remaining settings and click Create, and once the machine is up and running, we're going to explore the different VM details that we have.
So we're going to go into the VM instances page and look at things like the CPU platform, availability policies, and so on. Let me do that: let me click on the utility VM, because it's now in a running state, and I'm going to look for the CPU platform. You can see that right here, and if I click Edit, you'll see that I'm actually unable to modify it. That's because I can't do that while the instance is running. There are other things I could do: I could change the firewall rules, and I can add network tags, so certain things are available to change while an instance is running. In some cases, you have to stop the instance to change some of the properties, and in other cases you cannot change them at all unless you delete the instance. One of those is, for example, the network interfaces: if you want multiple network interfaces, you'd have to recreate your instance. The good thing is you can keep your boot disk and just reattach that boot disk later on. Now I can also look at the availability policies, so let's scroll down. Let's talk about what the on-host maintenance setting is. By default, it's set to migrate the VM instance, and that's recommended, but you could set this to terminate the instance. It's also going to automatically restart the instance, and you can configure that as well. Okay, so that was just a little bit of exploring the different options. I'm going to click Cancel, and what we're going to do now is explore some of the VM logs. If I'm looking at the detail page here, we want to get a little bit more information about the monitoring options that are available. We can click Monitoring here, and we'll get more information about the CPU. This instance barely runs, so we don't have much data yet. We get information about the network bytes and packets, and so on. We can also, if we go back to Details, look at Stackdriver Logging. This is now a different user interface, and here we have individual logs that we can explore, and we can view options here.
We can expand all of these and dig into all of the different logs that are in here, and even within there, expand each of the logs to get more information. So this uses Stackdriver Logging; we'll cover this feature a little bit more in a later course in the series, if you're interested in learning more about both the logging piece that we just looked at as well as the monitoring. So let's go to task two. We're now going to create a Windows virtual machine, so I'm going to go back through the navigation menu, Compute Engine, to VM instances, and I'm now going to create another instance. We're going to define a name; I'll just call this windows-vm, and we're going to choose a different region and zone this time. Why don't we put this into europe-west2, and specifically zone a. Let's pick a larger machine: let's pick one that has two virtual CPUs and 7.5 gigabytes of memory. And we can even go ahead now and change the boot disk, because by default this would be a Linux machine. We want to create a Windows machine, and specifically the lab is instructing me to look for the Windows Server 2016 Datacenter Core image, so let's scroll down. I can see that image right here and change the boot disk. Maybe I want some higher IOPS, so I can choose an SSD, and I could even make this larger, and click Select. All of that, again, is obviously going to affect the cost: I have the cost of the machine and the cost of the disk. But the new thing I have now is that the image I've chosen is a premium image, which means there's a cost associated with using that image; it's all billed together for you, so you can see that cost broken out right here. Now the other thing we're going to do is allow specific traffic, HTTP and HTTPS traffic. This just creates a network tag for us and then creates firewall rules on the network tag, so that we can enable traffic on those ports for the TCP protocol. So let's hit Create and create this instance.
And one thing we'll notice when the instance comes up is that under the Connect column, rather than seeing an SSH button, which is what we would have for a Linux machine, we should now see an RDP button, which is for the Remote Desktop Protocol. That's how you would access a Windows machine. Now, the important thing is, you obviously want to configure your username and password so that only authorized users can access that machine. So here you can see the RDP button, and what we're going to do now is click into the machine and set the Windows password. You can actually also do this by clicking down here; you could set the Windows password there as well, so let's just do it that way. You have a username here; it's taking the username that I have for my lab account, so this is the username right now. I can set that, and then it's going to provide me with a password. So there we go; I can now copy that password, and if I use an RDP connection, I can then get into the machine. This is a little bit outside of the scope for this lab, but if you want to and don't already have an RDP client, you can actually install one in Chrome through an extension. You could access the instance that way, and then configure it and do anything else you wanted to in this Windows virtual machine. So let me go ahead and close that, and I'm going to move on to task three now, which is to create a custom virtual machine. I'm going to go back to Create Instance and define a name, which I'll call my-custom-vm. I'll follow the lab instructions here for setting the region and zone, which is us-west1-b. And now, rather than choosing a specific machine type, I can go in here and just select Custom as the machine type and then define the exact number of cores and amount of memory. So let's say my specifications are that I want six virtual CPUs, and you can see how that scales; by the way, there are only certain options.
You can choose, and it goes all the way up to 96. So let me choose six here. It's going to scale that memory automatically for us; it gives us a range. Now, depending on that CPU count, there's an option to extend the memory, so you could get more than 39, you see, all the way up to 624; this is a separate option, and we'll talk more about it in the slides. So let me choose 32. Rather than scrolling here, I could also just type the value in, and that's also going to adjust the cost. Now, it's important to note that your custom machine may fall between two machine types that are actually already provided. The custom machine is generally going to be slightly more expensive, so if you have a standard machine that's very close to the custom machine, it's definitely something you would want to consider. And once the machine runs for more than 24 hours, you'll get rightsizing recommendations, which will tell you if the machine is too small or too large and make recommendations based on that. Let's go ahead and create that, and once it's up and running, we're going to SSH to the machine and run some commands on it, and that's actually going to wrap up the lab for us. Now, with any new project, you get this column here on the right-hand side to help you get started; because we're using Qwiklabs-generated projects, they're always going to be new projects, so you will see this throughout the training. You can certainly leverage it if you want, but I'm going to collapse that. Right, so the VM is up and running; let me SSH to it, and then we're going to run the free command to see information about any unused memory and swap space. So let me type free; we can see that here, and it lines up with the memory selections that we made for the machine. I can also get more information or details about the RAM installed; here we get more information about that as well. And I can verify the number of processors, which should be six.
And yep, the processor count is six. We can also see details about the CPU itself; here we get information about the architecture and the exact model. So you can get all of this information about any VM that you create, and you can also find more about it in the documentation. Depending on which region and zone you choose, you'll have different architectures and different models available to choose from. Okay, so that's all we wanted to show you here with this lab. You went ahead and created a virtual machine, the utility VM; we created a Windows VM; and then we created a custom virtual machine and verified that whatever custom settings we applied were actually used to create the machine by running commands within that machine.
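The manual checks at the end of the lab (free for memory, and a processor count) can also be scripted. This is a hedged sketch for a Linux VM; the commands used (nproc, free) are standard coreutils/procps tools, not anything GCP-specific, and the function simply skips whatever isn't installed:

```python
import shutil
import subprocess

def verify_vm_shape() -> dict:
    """Collect the same facts the lab checks by hand: vCPU count via
    nproc and total memory (in MB) via free. Skips missing commands."""
    info = {}
    if shutil.which("nproc"):
        info["vcpus"] = int(subprocess.check_output(["nproc"], text=True).strip())
    if shutil.which("free"):
        # The second line of `free -m` output holds the memory totals in MB.
        mem_line = subprocess.check_output(["free", "-m"], text=True).splitlines()[1]
        info["memory_mb"] = int(mem_line.split()[1])
    return info

print(verify_vm_shape())
```

On the custom VM from the lab you would expect a vCPU count of 6 and roughly 32 GB of memory reported, matching the shape chosen in the console.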
Before starting a migration to Google Cloud Platform, you need to create a secure connection between your on-premises environment and your VPC. This connection will support your migration process and also enhance the interoperability between your existing on-premises workloads and your cloud environment. Many customers start with a virtual private network connection over the Internet, using the IPsec protocol. The VPN connection is relatively easy to set up and doesn't require a physical connection between your on-premises environment and a Google Cloud Platform data center. On the other hand, you might not want to use the Internet, either because of security concerns or because you need more reliable bandwidth and lower latency. Google Cloud Interconnect is a Layer 2 private connection to your VPC. If your on-premises topology allows it, you can use Dedicated Interconnect, which connects your on-premises environment to one of our data centers directly. Partner Interconnect provides connectivity between your on-premises network and your VPC network through a supported service provider. A Partner Interconnect connection is useful if your data center is in a physical location that can't reach a Dedicated Interconnect colocation facility, or if your data needs don't warrant an entire 10-gigabit connection. Let's explore all these options in detail.