
Adding point layer in QGIS from SQL Server failing


I just downloaded QGIS a few days ago (version 2.8.2-Wien) but I am not able to get it to work as I expected. I tried pulling in spatial data from SQL Server (a "geometry" column containing points). I click the button that says "Add MSSQL Spatial Layer" and find my point table, but when I click the "Add" button it gives me this error (after a minute):

dbname='myDb' host=localhost\sqlserver2014 srid=4326 type=POINT table="dbo"."Locations_Sample" (spatial_data_point) sql= is an invalid layer and cannot be loaded. Please check the message log for further info.

The "log message" words in that error were a link in QGIS so I clicked it and it opened a "Log Messages" panel and on that panel (on the "General" tab), it just had the same error message I typed above (with one inconsequential text difference… it says "is an invalid layer - not loaded" instead of "and cannot be loaded").

I tried making the source table ("Locations_Sample") smaller (it was around 120,000 points, so I tried just 100 points) and that smaller table imported with no problems. I then kept increasing the amount of data in the table and hitting the "Refresh" button in QGIS to find the row limit, but the refresh succeeded every time, all the way up to the full 120,000 rows (showing all the points on the map as far as I can tell). This may seem like a workaround, but I don't think it really is, because every time I close and re-open the project it errors when trying to load that layer… also, other things (like the heatmap raster generator) are failing, possibly due to this "invalid layer" issue.

FYI, I subsequently tried loading the table with 100 records first and then putting all the records back into the source table "Locations_Sample", and that always worked. Also, I think the issue with the heatmap raster generation may be that it only works with the initial records I load (100 points).

How can I figure out what the problem is with this layer (and ideally fix it)?


Thank you for your help! (Nathan helped me on the QGIS issue tracker with the same advice that he and mapBaker posted here.) I added a unique int column to the table and then it loaded the layer with no problem.
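In case it helps anyone else, this is roughly the statement I ran to add that key (treat it as a hedged sketch; the table name is from my example and your key column name may differ):

-- Add a unique integer key so QGIS can identify each row
ALTER TABLE dbo.Locations_Sample
    ADD id INT IDENTITY(1,1) NOT NULL;

-- Optionally make it the primary key
ALTER TABLE dbo.Locations_Sample
    ADD CONSTRAINT PK_Locations_Sample PRIMARY KEY (id);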

Thanks again.


Can't load layers from SQL Server in GeoServer

I've tried to load a layer from SQL Server in GeoServer, but the layer doesn't load and the browser doesn't give any error.

My layer is created like this:

And when I try to preview my layer with OpenLayers in GeoServer, the layer doesn't load, and the box where the layer is supposed to appear remains white.


2 Answers

To the question you asked:

I wouldn't rely on disk queue alone. In fact, I rarely ever look at disk queue lengths unless I'm getting in deep with a problem. It is best to look at your disk's latency. Those are the Avg. Disk Sec/Read (or /Write and /Transfer) counters. That tells you what your disk latency is from Windows' perspective: the time a request takes after being sent to the disk and brought back. Disk queuing nowadays doesn't tell you a lot, because most IO subsystems can handle a deep disk queue and often have multiple spindles doing work in your RAID group. Finally, in this case your disk queue length doesn't even look that bad. From here it looks like the max it reached in the time of this screenshot (for the average length) was 1.377. That's nothing on most SQL Server systems. Look at your actual latency. Also, I don't look at % Disk Time; I look at the idle time instead. That is a more reliable counter, and you just have to do a little math to read it: the more idle, the less activity.
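If you want the same picture from inside SQL Server rather than Perfmon, here's a minimal sketch against sys.dm_io_virtual_file_stats (the numbers are cumulative since instance start, so sample twice and diff for current latency):

-- Average read/write latency per database file since instance start
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY avg_read_ms DESC;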

To The General Question Behind Your Question

I'll answer this one by starting with a question: why did you go right to your IO? There could be any number of things causing your slowdown. Answering that exhaustively here is tough, but at a high level, some things to look at/consider:

  • Are you experiencing blocking? I would download sp_WhoIsActive and have a look while you are getting these errors. Do you see blocking? Do you see the query behind the request(s) that are timing out? What is the duration?
  • Have you analyzed your SQL Server wait stats to see what your chief causes of waits are?
  • Do you know which query or queries are causing the timeouts? If so, can you look at those and see if there is any room for tuning?

There could be many other things here. The problem could be on the connection or network. It could be blocking, it could be a need for index and query tuning, it could be that you expect the queries to take longer than the default 30-second timeout, etc.

But I'd try to gather more data and then choose a path to go down. This is an old whitepaper, but it is very useful for performance tuning by waits. While there will be new wait types Tom didn't mention in this paper, it still very much applies and will help you out.
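If you want a quick starting point for the wait-stats analysis, here's a minimal sketch (the exclusion list of benign waits is deliberately abbreviated; expand it as the whitepaper suggests):

-- Top waits since the last instance restart or stats clear
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TASK_STOP', N'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;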


    SQL Agent embedded PowerShell script in CmdExec step fails with import-module sqlps

I am trying to create a SQL Agent job that dynamically backs up all non-corrupted SSAS databases on an instance without the use of SSIS. In my SQL Agent job, when I create a CmdExec step that points to a PowerShell script file (.ps1) like this:

    the job executes successfully (or at least gets far enough to only encounter logic or other syntax issues).

    This approach won't work for a final solution, because there is a requirement to keep the PowerShell script internal to SQL. So I have a different CmdExec step that embeds the PowerShell script like so:

    However, when executed with the embedded script, the job errors out quickly with the following response:

    The specified module 'sqlps' was not loaded because no valid module file was found in any module directory.

    Why can't I reference the module from an embedded script, but doing so in a ps1 file works just fine?
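For context, the step is created along these lines (a hypothetical sketch using sp_add_jobstep rather than my actual script; the embedded command is trimmed down to the failing import):

EXEC msdb.dbo.sp_add_jobstep
     @job_name  = N'Backup SSAS Databases',  -- hypothetical job name
     @step_name = N'Embedded PowerShell',
     @subsystem = N'CmdExec',
     @command   = N'powershell.exe -NoProfile -Command "Import-Module sqlps; ..."';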


SharePoint stores everything in the content database. Each content database contains many tables that hold the information.

Let's say you have site collection A in content DB 01. When a user creates a site under site collection A, everything from content to security is stored in content DB 01. Specifically for user permissions, the relevant tables are:

    Groups: Table that holds information about all the SharePoint groups in each site collection.

Roles: Table that holds information about all the SharePoint roles (permission levels) for each site.

    GroupMembership: Table that holds information about all the SharePoint group members.

    RoleAssignment"Table that holds information about all the users or SharePoint groups that are assigned to roles.

Here is a great blog on TechNet that explains the important tables in a content DB: Inside a SharePoint Content DB
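As a hedged illustration of how those tables relate (Microsoft does not support querying a content database directly, so run something like this only against a restored copy for exploration; exact column names vary by SharePoint version):

-- Groups and their members, joined through GroupMembership
SELECT g.Title     AS group_name,
       ui.tp_Login AS member_login
FROM dbo.Groups AS g
JOIN dbo.GroupMembership AS gm
  ON gm.GroupId = g.ID AND gm.SiteId = g.SiteId
JOIN dbo.UserInfo AS ui
  ON ui.tp_ID = gm.MemberId AND ui.tp_SiteID = gm.SiteId;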


    Solution Design

    This section contains the following:

    · Storage Configuration for SQL Guest VMs

    Introduction

    This section details the architectural components of Cisco HyperFlex, a hyperconverged system to host Microsoft SQL Server databases in a virtual environment. Figure 8 depicts a sample Cisco HyperFlex hyperconverged reference architecture comprising HX-Series All-NVMe data nodes.

Cisco HyperFlex is composed of a pair of Cisco UCS Fabric Interconnects along with up to sixteen HX-Series All-NVMe data nodes per cluster. Up to 16 compute-only servers can also be added per HyperFlex cluster. Adding Cisco UCS rack mount servers and/or Cisco UCS 5108 blade chassis, which house Cisco UCS blade servers, allows for additional compute resources in an extended cluster design. Up to eight separate HX clusters can be installed under a single pair of Fabric Interconnects. The two Fabric Interconnects connect to every HX-Series rack mount server and to every Cisco UCS 5108 blade chassis and Cisco UCS rack mount server. Upstream network connections, also referred to as the "north bound" network, are made from the Fabric Interconnects to the customer datacenter network at the time of installation. In the above reference diagram, a pair of Cisco Nexus 9000 series switches are used and configured as vPC pairs for high availability. For more details on the physical connectivity of HX-Series servers, compute-only servers, and Fabric Interconnects to the north bound network, please refer to the Physical Topology section of the Cisco HyperFlex 4.0 for Virtual Server Infrastructure with VMware ESXi CVD.

Infrastructure services such as Active Directory, DNS, NTP, and VMware vCenter are typically installed outside the HyperFlex cluster. Customers can leverage these existing services when deploying and managing the HyperFlex cluster.

The HyperFlex storage solution has several data protection techniques, explained in detail in the Technology Overview section, one of which is data replication, which needs to be configured at HyperFlex cluster creation. Based on specific performance and data protection requirements, customers can choose either a replication factor of two (RF2) or three (RF3). For the solution validation (described in the "Solution Testing and Validation" section later in this document), we configured the test HyperFlex cluster with replication factor 3 (RF3).

As described in the earlier Technology Overview section, the Cisco HyperFlex distributed file system software runs inside a controller VM installed on each cluster node. These controller VMs pool and manage all the storage devices and expose the underlying storage as NFS mount points to the VMware ESXi hypervisors. The ESXi hypervisors expose these NFS mount points as datastores to the guest virtual machines, which store their data there.

For this document, validation is done only on HXAF220c-M5N All-NVMe converged nodes, which act as both compute and storage nodes.

    Logical Network Design

In the Cisco HyperFlex All-NVMe system, the Cisco VIC 1387 is used to provide the required logical network interfaces on each host in the cluster. The communication pathways in the Cisco HyperFlex system can be categorized into four different traffic zones, as described below.

    · Management Zone: This zone comprises the connections needed to manage the physical hardware, the hypervisor hosts, and the storage platform controller virtual machines (SCVM). These interfaces and IP addresses need to be available to all staff who will administer the HX system, throughout the LAN/WAN. This zone must provide access to Domain Name System (DNS) and Network Time Protocol (NTP) services and allow Secure Shell (SSH) communication. In this zone are multiple physical and virtual components:

    - Fabric Interconnect management ports.

    - Cisco UCS external management interfaces used by the servers, which answer via the FI management ports.

    - ESXi host management interfaces.

    - Storage Controller VM management interfaces.

    - A roaming HX cluster management interface.


· VM Zone: This zone comprises the connections needed to service network IO to the guest VMs that will run inside the HyperFlex hyperconverged system. This zone typically contains multiple VLANs that are trunked to the Cisco UCS Fabric Interconnects via the network uplinks and tagged with 802.1Q VLAN IDs. These interfaces and IP addresses need to be available to all staff and other computer endpoints that need to communicate with the guest VMs in the HX system, throughout the LAN/WAN.

· Storage Zone: This zone comprises the connections used by the Cisco HX Data Platform software, ESXi hosts, and the storage controller VMs to service the HX Distributed Data Filesystem. These interfaces and IP addresses always need to be able to communicate with each other for proper operation. During normal operation, this traffic all occurs within the Cisco UCS domain; however, there are hardware failure scenarios where this traffic would need to traverse the network northbound of the Cisco UCS domain. For that reason, the VLAN used for HX storage traffic must be able to traverse the network uplinks from the Cisco UCS domain, reaching FI A from FI B, and vice versa. This zone is primarily jumbo frame traffic, so jumbo frames must be enabled on the Cisco UCS uplinks. In this zone are multiple components:

- A teamed interface used for storage traffic on each ESXi host in the HX cluster.

    - Storage Controller VM storage interfaces.

    - A roaming HX cluster storage interface.

· vMotion Zone: This zone comprises the connections used by the ESXi hosts to enable live migration of the guest VMs from host to host. During normal operation, this traffic all occurs within the Cisco UCS domain; however, there are hardware failure scenarios where this traffic would need to traverse the network northbound of the Cisco UCS domain. For that reason, the VLAN used for HX live migration traffic must be able to traverse the network uplinks from the Cisco UCS domain, reaching FI A from FI B, and vice versa.

By leveraging Cisco UCS vNIC templates, LAN connectivity policies, and vNIC placement policies in the service profile, eight vNICs are carved out from the Cisco VIC 1387 on each HX-Series server for the network traffic zones mentioned above. Every HX-Series server will detect the network interfaces in the same order, and they will always be connected to the same VLANs via the same network fabrics. Table 1 lists the vNICs and other configuration details used in the solution.

Figure 9 illustrates the logical network design of an HX-Series server in a HyperFlex cluster.

As shown in Figure 9, four virtual standard switches are configured for the four traffic zones. Each virtual switch is configured with two vNICs and is connected to both Fabric Interconnects. The vNICs are configured in active-standby fashion for the Storage, Management, and vMotion networks. However, for the VM network virtual switch, the vNICs are configured in active-active fashion. This ensures that the data path for guest VM traffic has aggregated bandwidth for that traffic type.

    Jumbo frames are enabled for:

· Storage traffic: Enabling jumbo frames on the Storage traffic zone benefits the following SQL Server database use cases:

- Write-heavy SQL Server guest VMs, driven by activities such as database restores, index rebuilds, data imports, and so on.

- Read-heavy SQL Server guest VMs, driven by typical maintenance activities such as database backups, data exports, report queries, index rebuilds, and so on.

· vMotion traffic: Enabling jumbo frames on the vMotion traffic zone helps the system quickly fail over SQL VMs to other hosts, thereby reducing overall database downtime.

Creating a separate logical network (using two dedicated vNICs) for guest VMs offers the following advantages:

    · Isolating guest VM traffic from other traffic such as management, HX replication and so on.

· A dedicated MAC pool can be assigned to each vNIC, which simplifies troubleshooting connectivity issues.

· As shown in Figure 9, the VM Network switch is configured with two vNICs in active-active fashion to provide two active data paths, which results in aggregated bandwidth.

    For more details on the network configuration of the HyperFlex HX-Server node, using Cisco UCS network policies, templates and service profiles, refer to the Cisco UCS Design section in the Cisco HyperFlex 4.0 for Virtual Server Infrastructure with VMware ESXi CVD.

The following sections provide more details on configuration and deployment best practices for deploying SQL Server databases on HyperFlex All-NVMe nodes.

    Storage Configuration for SQL Guest VMs

Figure 10 illustrates the storage configuration recommendations for virtual machines running SQL Server databases on HyperFlex All-NVMe nodes. A single LSI Logic virtual SCSI controller is used to host the guest OS. Separate Paravirtual SCSI (PVSCSI) controllers are configured to host the SQL Server data and log files. For large-scale, high-performing SQL deployments, it is recommended to spread the SQL data files across two or more PVSCSI controllers for better performance, as shown in the figure. Additional performance guidelines are detailed in the Deployment Planning section.
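To make the file layout concrete, here is a minimal sketch of a database created with its data files on virtual disks that sit behind different PVSCSI controllers (drive letters, file names, and sizes are hypothetical):

-- Data files on E: and F: (virtual disks on separate PVSCSI controllers), log on G:
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_data1, FILENAME = 'E:\SQLData\SalesDB_data1.mdf', SIZE = 51200MB),
    (NAME = SalesDB_data2, FILENAME = 'F:\SQLData\SalesDB_data2.ndf', SIZE = 51200MB)
LOG ON
    (NAME = SalesDB_log, FILENAME = 'G:\SQLLog\SalesDB_log.ldf', SIZE = 20480MB);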

    Deployment Planning

It is crucial to implement configuration best practices and recommendations in order to achieve the best performance from any underlying system. This section details the major design and configuration best practices that should be followed when deploying SQL Server databases on All-NVMe HyperFlex systems.

    Datastore Recommendation

Follow these recommendations when deploying SQL Server virtual machines on HyperFlex All-NVMe systems.

All of a virtual machine's virtual disks, comprising the guest operating system, SQL data, and transaction log files, can be placed on a single datastore exposed as an NFS file share to the ESXi hosts. Deploying multiple SQL virtual machines on a single datastore simplifies management tasks.

There is a maximum queue depth limit of 1024 for each NFS datastore per host, which is an optimal queue depth for most workloads. However, when the consolidated IO requests from all the virtual machines deployed on the datastore exceed 1024 (the per-host limit), the virtual machines might experience higher IO latencies. Symptoms of higher latencies can be identified using ESXTOP results.

In such cases, creating a new datastore and deploying SQL virtual machines on it will help. The general recommendation is to deploy low-IO-demand SQL virtual machines on a single datastore until high guest latencies are noticed. Also, a dedicated datastore for high-IO-demand SQL VMs gets a dedicated queue, so lower latencies can be expected.

The following figure shows two different datastores used for deploying various SQL guest virtual machines. "SQL-DS1" is used to deploy multiple small-to-medium SQL virtual machines, while "SQL-DS2" is dedicated to a single large SQL virtual machine with high IO performance requirements.

    Figure 11 HyperFlex Datastores

    SQL Virtual Machine Configuration Recommendation

When creating a VM to deploy a SQL Server instance on a HyperFlex All-NVMe system, the following recommendations should be followed for performance and easier administration.

    Cores per Socket

NUMA is becoming increasingly important for ensuring that workloads like databases allocate and consume memory within the same physical NUMA node where the vCPUs are scheduled. By choosing an appropriate Cores per Socket value, make sure the virtual machine is configured so that both memory and CPU demands can be met by a single physical NUMA node. For wide virtual machines (demanding more resources than a single physical NUMA node provides), resources can be allocated from two or more physical NUMA nodes. For more details on virtual machine configuration best practices for varying resource requirements, please refer to this VMware guide: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf

    Memory Reservation

SQL Server database transactions are usually CPU and memory intensive. For heavy OLTP database systems, it is recommended to reserve all the memory assigned to the SQL virtual machines. This ensures that the memory assigned to the SQL VM is committed and eliminates the possibility of the ESXi hypervisor ballooning or swapping out the memory. Memory reservations have little overhead on the ESXi system. For more information about memory overhead, see Understanding Memory Overhead: https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.resmgmt.doc%2FGUID-4954A03F-E1F4-46C7-A3E7-947D30269E34.html

    Figure 12 Memory Reservations for SQL Virtual Machine

    Paravirtual SCSI adapters for Large-Scale High IO Virtual Machines

For virtual machines with high disk IO requirements, it is recommended to use Paravirtual SCSI (PVSCSI) adapters. The PVSCSI controller is a virtualization-aware, high-performance SCSI adapter that allows the lowest possible latency and highest throughput with the lowest CPU overhead. It also has higher queue depth limits compared to legacy controllers. Legacy controllers (LSI Logic SAS, LSI Logic Parallel, and so on) can become a bottleneck and impact database performance, and hence are not recommended for IO-intensive database applications such as SQL Server.

    Queue Depth and SCSI Controller Recommendations

Queue depth settings of virtual disks are often overlooked, which can impact performance, particularly in high-IO workloads. Systems such as Microsoft SQL Server databases tend to issue many simultaneous IOs, and the default VM driver queue depth setting (64 for PVSCSI) can be insufficient to sustain heavy IO. It is recommended to change the default queue depth setting to a higher value (up to 254), as suggested in this VMware KB article: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2053145

    For large-scale and high IO databases, it is recommended to use multiple virtual disks and have those virtual disks distributed across multiple SCSI controller adapters rather than assigning all of them to a single SCSI controller. This ensures that the guest VM will access multiple virtual SCSI controllers (four SCSI controllers maximum per guest VM), which in turn results in greater concurrency by utilizing the multiple queues available for the SCSI controllers.

    Virtual Machine Network Adapter type

    It is highly recommended to configure virtual machine network adapters with “VMXNET 3”. VMXNET 3 is the latest generation of para-virtualized NICs designed for performance. It offers several advanced features including multi-queue support, receive side scaling, IPv4/IPv6 offloads, and MSI/MSI-X interrupt delivery. While creating a new virtual machine, choose “VMXNET 3” as the adapter type as shown in Figure 13.

    Guest Power Scheme Settings

HX servers are optimally configured at factory installation time with appropriate host-level BIOS policy settings and hence do not require any changes. Similarly, the ESXi power management option (at the vCenter level) is set to "High performance" by the installer at the time of HX installation, as shown in Figure 14.

Inside the SQL Server guest, it is recommended to set the power management option to "High performance" for optimal database performance, as shown in Figure 15. Starting with Windows Server 2019, the High performance setting is chosen by default.

For other SQL Server-specific configuration recommendations for virtualized environments, see the SQL Server best practices guide on VMware vSphere.

    Achieving Database High Availability

Cisco HyperFlex storage systems incorporate efficient storage-level availability techniques such as data mirroring (Replication Factor 2/3) and native snapshots to ensure continuous data access for the guest VMs hosted on the cluster. More details on the HX Data Platform cluster tolerated failures are available here: https://www.cisco.com/c/en/us/td/docs/hyperconverged_systems/HyperFlex_HX_DataPlatformSoftware/AdminGuide/3_5/b_HyperFlexSystems_AdministrationGuide_3_5/b_HyperFlexSystems_AdministrationGuide_3_5_chapter_00.html#id_13113

This section describes the high availability techniques that help enhance the availability of virtualized SQL Server databases (apart from the storage-level availability that comes with HyperFlex solutions).

    The availability of the individual SQL Server database instance and virtual machines can be enhanced using the technologies listed below:

    · VMware HA: to achieve virtual machine availability

· Microsoft SQL Server AlwaysOn: to achieve database-level high availability

    Single VM / SQL Instance Level High Availability using VMware vSphere HA Feature

The Cisco HyperFlex solution leverages VMware clustering to provide availability for the hosted virtual machines. Since the exposed NFS storage is mounted on all the hosts in the cluster, it acts as a shared storage environment that helps migrate VMs between hosts. This configuration allows VMs to migrate seamlessly during both planned and unplanned outages. The vMotion vNICs need to be configured with jumbo frames for faster guest VM migration.

    Database Level High Availability using SQL AlwaysOn Availability Group Feature

HyperFlex architecture inherently uses NFS datastores. Microsoft SQL Server Failover Cluster Instance (FCI) needs shared storage, which cannot be on NFS storage (unsupported by VMware ESXi). Hence FCI is not supported as a high availability option; instead, the SQL Server AlwaysOn Availability Group feature can be used. Introduced in Microsoft SQL Server 2012, AlwaysOn Availability Groups maximize the availability of a set of user databases for an enterprise. An availability group supports a failover environment for a discrete set of user databases, known as availability databases, that fail over together. An availability group supports a set of read-write primary databases and one to eight sets of corresponding secondary databases. Optionally, secondary databases can be made available for read-only access and/or some backup operations. More information on this feature can be found on Microsoft MSDN here: https://msdn.microsoft.com/en-us/library/hh510230.aspx

Microsoft SQL Server AlwaysOn Availability Groups take advantage of Windows Server Failover Clustering (WSFC) as a platform technology. WSFC uses a quorum-based approach to monitor the overall cluster health and maximize node-level fault tolerance. AlwaysOn Availability Groups are configured as WSFC cluster resources, and their availability depends on the underlying WSFC quorum modes and voting configuration explained here: https://docs.microsoft.com/en-us/sql/sql-server/failover-clusters/windows/wsfc-quorum-modes-and-voting-configuration-sql-server

Using AlwaysOn Availability Groups with synchronous replication, which supports automatic failover, enterprises can achieve seamless database availability across the configured database replicas. The following figure depicts a scenario where an AlwaysOn availability group is configured between SQL Server instances running on two separate HyperFlex storage systems. To ensure that the involved databases provide guaranteed high performance and no data loss in the event of a failure, proper planning needs to be done to maintain a low-latency replication network link between the clusters.

    Figure 16 Synchronous AlwaysOn Configuration Across HyperFlex All-NVMe Systems
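For orientation, a minimal T-SQL sketch of a two-replica synchronous availability group like the one in Figure 16 (instance names, endpoint URLs, and the database are hypothetical; prerequisites such as the WSFC cluster and HADR endpoints are assumed to already be in place):

-- Synchronous, automatic-failover availability group across two replicas
CREATE AVAILABILITY GROUP AG_Sales
FOR DATABASE SalesDB
REPLICA ON
    N'SQLVM1' WITH (ENDPOINT_URL = N'TCP://sqlvm1.contoso.local:5022',
                    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                    FAILOVER_MODE = AUTOMATIC),
    N'SQLVM2' WITH (ENDPOINT_URL = N'TCP://sqlvm2.contoso.local:5022',
                    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                    FAILOVER_MODE = AUTOMATIC);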

Although there are no definitive rules on the infrastructure used for hosting a secondary replica, the following are some guidelines if you plan to have the primary replica on the All-NVMe high-performing cluster:

    · In case of a synchronous replication (no data loss)

- The replicas need to be hosted on similar hardware configurations to ensure that database performance is not compromised while waiting for acknowledgment from the replicas.

    - Ensure a high-speed, low latency network connection between the replicas.

    · In case of an asynchronous replication (may have data loss)

- The performance of the primary replica does not depend on the secondary replica, so the secondary can be hosted on lower-cost hardware as well.

- The amount of data loss depends on the network characteristics and the performance of the replicas.

If you plan to deploy an AlwaysOn Availability Group within a single HyperFlex All-NVMe cluster, involving more than two replicas, VMware DRS anti-affinity rules must be used to ensure that each SQL VM replica is placed on a different VMware ESXi host, reducing database downtime. For more details on configuring VMware anti-affinity rules, see: http://pubs.vmware.com/vsphere-60/index.jsp?topic=%2Fcom.vmware.vsphere.resmgmt.doc%2FGUID-7297C302-378F-4AF2-9BD6-6EDB1E0A850A.html

SQL Server Failover Cluster Instance (FCI), which leverages Windows Server Failover Clustering (WSFC), is a commonly used approach for providing high availability to SQL instances. A clustered SQL instance requires the underlying storage to be shared among the participating cluster nodes. WSFC uses SCSI-3 reservations on the storage volumes so that a volume can be online and owned by only one node at any given time. NFS-based storage volumes are not certified for Windows Failover Cluster, so deploying a SQL Server Failover Cluster Instance using HyperFlex NFS-based volumes is not recommended. For more details on the storage protocols that are supported and not supported for Failover Cluster, go to: https://kb.vmware.com/s/article/2147661


4 Answers

    Your query is first-order-injection maybe-just-maybe-safe, depending on how CodeIgniter improved from the last time I used it.

Let's be clear on this: no amount of sanitization will save you from SQL injection. The true "best fix" for PHP is parametrization, which takes the variables out of the query. Failing that, however, proper sanitization can help. Last time I checked (three years ago), CodeIgniter used mysql_real_escape_string which, while a valid deterrent for most script kiddies, will not stop a seasoned hacker.

The best thing to do is to read up on how CodeIgniter actually does the sanitization/parametrization. If it is just addslashes, avoid it like the plague. There are plenty of other ways to perform SQL injections, by the way - for instance, you can juke character encoding to pass quotes through on most addslashes/mysql_escape_string setups. There are plenty of tutorials on the matter.

No, escaping quotes/double quotes doesn't guarantee that your web application is not vulnerable. It also depends on the SQL queries you use. I have seen a lot of examples with addslashes() and escape()-like functions where people do something like this:

This is of course vulnerable, because you don't need a single quote/double quote to perform SQL injection. It's also worth mentioning that in some rare cases addslashes can be bypassed [1].

The most recommended solution to your SQL injection problem is to forget about string concatenation when creating SQL statements and to take a look at parameterized queries/prepared statements. Speaking of CodeIgniter, you should also be interested in the Active Record class.
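To make the idea concrete on the database side, here's a minimal T-SQL sketch of a parameterized paging query (table, column, and parameter names are invented). The point is that the user-supplied values travel separately from the statement text and are bound with declared types:

-- Parameterized paging query: user input is bound as typed parameters
EXEC sys.sp_executesql
     N'SELECT name, price
       FROM dbo.Products
       WHERE name LIKE @query
       ORDER BY name
       OFFSET @offset ROWS FETCH NEXT @limit ROWS ONLY;',
     N'@query nvarchar(100), @offset int, @limit int',
     @query = N'%hello%', @offset = 0, @limit = 50;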

If you are 100% sure that the user always has to supply an integer (as with your limit variable), then you can try casting:

Using casting that way protects you from SQL injection ('cos there is no way to put a non-integer value into your SQL query).

Oh, and don't forget that other variables could be vulnerable. So make sure to check query, offset, and any other variable the user controls. It's also good practice to check the behaviour of the web application when it gets unexpected data - in this case it could be an array instead of a string: get_stuff?query[]=hello&limit[]=50&offset[]=0

What most people don’t understand is that SQL string escape functions are only intended to be used for values that are designated for SQL string literals. This means you can only use such values inside a SQL string literal like '…' or "…".

    These string escape functions are used to avoid user provided input being interpreted as something other than a string value by mistake. This is done by replacing the delimiting quotes with special escape sequences so that they are not interpreted as the ending delimiter but as a literal quote.

    Now if your value is intended to be an integer value in SQL like LIMIT requires, escaping the value with string escape functions doesn’t work as it’s obviously not a string literal but an integer literal.

Since it’s not a string, there are no delimiting quotes that need to be bypassed by the attacker. And as the injection happens in the LIMIT clause, an injected 50 UNION SELECT … may be enough to successfully exploit the vulnerability.

To fix this, make sure the values provided by the user are what you’re expecting. In the case of an integer value, check whether it’s actually an integer: either a value of integer type or a valid string representation of an integer.

    Additionally, there are libraries that provide a simple interface to parameterized statements and/or prepared statements where the statement and the parameter values are handled separately and the values automatically get converted to the proper type.


    How about exporting from SQL Server back to GeoJSON?

    So querying data in the table is really easy for me now – but how about the scenario where I have data in SQL Server, and I want to export the results of a SELECT query to GeoJSON format?

    Fortunately we can use the JSON querying capabilities of SQL Server – I can suffix my query with ‘FOR JSON PATH’ to convert the results of a SELECT query from a tabular format to a JSON format, as shown below:

    But this doesn’t get me a result that’s quite right – it’s just a JSON formatted list of GeoJSON features. To make this a properly formatted GeoJSON featurecollection, I need to give this list a name – ‘features’, and specify the type as a ‘FeatureCollection’. Again this is reasonably straightforward with the built in JSON querying features of SQL Server.
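A hedged sketch of what that wrapping query might look like (table and column names are invented; it assumes a geography column whose Lat/Long properties are accessible):

-- Wrap per-row GeoJSON features into a named FeatureCollection
SELECT 'FeatureCollection' AS [type],
       JSON_QUERY((
           SELECT 'Feature' AS [type],
                  JSON_QUERY('{"type":"Point","coordinates":['
                      + CONVERT(varchar(32), Position.Long) + ','
                      + CONVERT(varchar(32), Position.Lat) + ']}') AS [geometry],
                  Name AS [properties.name]
           FROM dbo.Locations
           FOR JSON PATH
       )) AS [features]
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER;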

    If you want to validate your GeoJSON, you can use a site like GeoJSONLint.com.


    You mention both SPD and source code? With code, it would be a custom web part or SharePoint app (aka add-in), which are both developer tasks.

    But this can be done with SPD with no code. Just set up an "external content type". Here's one tutorial, but there are many others:

    Though, the above has at least 4 problems.

First, it's using SP 2010, which shouldn't be a problem, as this really hasn't changed much between versions.

    Second, it finished without telling you to click on the "create lists and form" button up in the ribbon. Click that button at the end of the process. That will create what looks and feels like a list. The user will be able to add, edit, sort, filter, etc., just like they're using a list, but the data is being read from and written to a database table.

    Third, when it has you set up the connection, the screen shot has the "connect with user's identity" option selected. Good luck getting that to work. More realistically, you'll need to configure a service account via the secure store service (which is accessed via central admin, so you may need to get IT involved).

    Fourth, after the external content type is created, no one will have permissions to use it. Permissions will need to be granted via central admin.


    AZ-304 Practice Questions

    You need to recommend a solution to identify the queries that take the longest to execute.

    What should you include in the recommendation?

    1.) Correlate Azure resource usage and performance data with app configuration and performance data

    2.) Visualize the relationships between application components

    3.) Track requests and exceptions to a specific line of code within the application

    4.) Analyze how many users return to the application and how often they select a particular dropdown value

    You need to design a monitoring solution for the web app. Which Azure monitoring services should you use for each?

    3.) a. Azure Application Insights

    The Hyper-V cluster contains 30 virtual machines that run Windows Server 2012 R2. Each virtual machine runs a different workload. The workloads have predictable consumption patterns.

    You plan to replace the virtual machines with Azure virtual machines that run Windows Server 2016. The virtual machines will be sized according to the consumption pattern of each workload.

    You need to recommend a solution to minimize the compute costs of the Azure virtual machines. Which two recommendations should you include in the solution?

    C. Activate Azure Hybrid Benefit for the Azure virtual machines.

    For customers with Software Assurance, Azure Hybrid Benefit for Windows Server allows you to use your on-premises Windows Server licenses and run
    Windows virtual machines on Azure at a reduced cost. You can use Azure Hybrid Benefit for Windows Server to deploy new virtual machines with Windows OS.

    D. Purchase Azure Reserved Virtual Machine Instances for the Azure virtual machines.

    The subscription contains the storage accounts:
    Storage1(storagev2) --> RG1 -->East US.
    Storage2(BlobStorage)-->RG2 --> West US

    You create the Azure SQL databases:
    SQLdb1-->RG1-->SQLsvr1-->STD pricing tier
    SQLdb2-->RG1-->SQLsvr1-->STD pricing tier
    SQLdb3-->RG2-->SQLsvr2-->Premium pricing tier

1.) When you enable auditing for SQLdb1, can you store the audit info in storage1?

2.) When you enable auditing for SQLdb2, can you store the audit info in storage2?

    Users report general issues with the data. You advise the company to implement live monitoring and use ad hoc queries on stored JSON data. You also advise the company to set up smart alerting to detect anomalies in the data.

    You need to recommend a solution to set up smart alerting.
    What should you recommend?

    Azure Monitor Logs is a feature of Azure Monitor that collects and organizes log and performance data from monitored resources. Analyze, Alert, Visualize, Insights, Retrieve, and Export.

    Use it to monitor your live applications. It will automatically detect performance anomalies, and includes powerful analytics tools to help you diagnose issues and to understand what users actually do with your app.

    Each department has a specific spending limit for its Azure resources.

    You need to ensure that when a department reaches its spending limit, the compute resources of the department shut down automatically.

    Which two features should you include in the solution?

    C. the spending limit of an Azure account

    The spending limit in Azure prevents spending over your credit amount. All new customers who sign up for an Azure free account or subscription types that include credits over multiple months have the spending limit turned on by default. The spending limit is equal to the amount of credit and it can't be changed.

    D. Cost Management budgets

Turn on the spending limit after removing it. This feature is available only when the spending limit has been removed indefinitely for subscription types that include credits over multiple months.

    storage1, storage account, storage in East US
    storage2, storage account, storageV2 in East US
    Workspace1, log analytics workspace in East US
    Workspace2, log analytics workspace in East US
    Hub1, Event hub in East US

    You create an Azure SQL database named DB1 that is hosted in the East US region.

    To DB1, you add a diagnostic setting named Settings1. Settings1 archives SQLInsights to storage1 and sends SQLInsights to Workspace1.

    1.) Can you add a new diagnostic setting to archive SQLInsights logs to storage2?

    2.) Can you add a new diagnostic setting that sends SQLInsights logs to Workspace2?

1.) What is the amount of time SQLInsights data will be stored in blob storage?

    2.) What is the maximum amount of time SQLInsights data can be stored in Azure Log Analytics?

    In the exhibit, the SQLInsights data is configured to be stored in Azure Log Analytics for 90 days. However, the question is asking for the "maximum" amount of time that the data can be stored which is 730 days.

    You plan to deploy a custom application to each subscription. The application will contain the following:
    ✑ A resource group
    ✑ An Azure web app
    ✑ Custom role assignments
    ✑ An Azure Cosmos DB account
    You need to use Azure Blueprints to deploy the application to each subscription.

    What is the minimum number of objects required to deploy the application?

    When creating a blueprint definition, you'll define where the blueprint is saved. Blueprints can be saved to a management group or subscription that you have Contributor access to. If the location is a management group, the blueprint is available to assign to any child subscription of that management group.

    Blueprint definitions: 1
One definition, as you plan to deploy the same custom application to each subscription.

    With Azure Blueprints, the relationship between the blueprint definition (what should be deployed) and the blueprint assignment (what was deployed) is preserved.

    What should you include in the recommendation?

    ✑ Ensure that all ExpressRoute resources are created in a resource group named RG1.
    ✑ Delegate the creation of the ExpressRoute resources to an Azure Active Directory (Azure AD) group named Networking.
    ✑ Use the principle of least privilege.

    1.) Ensure all ExpressRoute resources are created in RG1

    2.) Delegate the creation of the ExpressRoute resources to Networking

    2.) A custom RBAC role assignment at the level of RG1

    MFA Policy Configuration:
    Enable Policy set to off
    Grant
    Select the controls to be enforced
    Grant access selected.
    Require multi-factor authentication: yes
    Require device to be marked as compliant: no
    Require hybrid azure ad joined devices: yes
    Require approved client apps: no
    Require app protection policy: no
    For multiple controls: require one of the selected controls.

    What is the result of the policy?

    You need to recommend a solution to meet the following requirements:

    ✑ Prevent the IT staff that will perform the deployment from retrieving the secrets directly from Key Vault.
    ✑ Use the principle of least privilege.

    Which two actions should you recommend?

    A. Create a Key Vault access policy that allows all get key permissions, get secret permissions, and get certificate permissions.

    B. From Access policies in Key Vault, enable access to the Azure Resource Manager for template deployment.

    C. Create a Key Vault access policy that allows all list key permissions, list secret permissions, and list certificate permissions.

    D. Assign the IT staff a custom role that includes the Microsoft.KeyVault/Vaults/Deploy/Action permission.

    B. From Access policies in Key Vault, enable access to the Azure Resource Manager for template deployment.

    To access a key vault during template deployment, set enabledForTemplateDeployment on the key vault to true.

    D. Assign the IT staff a custom role that includes the Microsoft.KeyVault/Vaults/Deploy/Action permission.

    The user who deploys the template must have the Microsoft.KeyVault/vaults/deploy/action permission for the scope of the resource group and key vault.

    How many instances of Key Vault should you implement?

    The contents of your key vault are replicated within the region and to a secondary region at least 150 miles away but within the same geography. This maintains high durability of your keys and secrets. See the Azure paired regions document for details on specific region pairs.

    You need to recommend which certificates are required for the deployment.

    1.) Trusted Root Certification Authorities certificate store on each laptop

    2.) The users Personal store on each laptop

    Which certificates should be used for each

    You need to ensure the application can use secure credentials to access these services.

    Functionality
    1.) Azure Key vault
    2.)Azure SQL
    3.) CosmosDB

    Which authentication method should you recommend for each functionality?

    You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1. The solution must meet the following requirements:
    ✑ To the manager of the developers, send a monthly email message that lists the access permissions to Application1.
    ✑ If the manager does not verify an access permission, automatically revoke that permission.
    ✑ Minimize development effort.

    What should you recommend?

    Customer requirements are: Users must authenticate by using a personal Microsoft account and multi-factor authentication

    Reporting requirements: Users must authenticate by using either Contoso credentials or a personal Microsoft account. You must be able to manage the accounts from the Azure AD.

    Which authentication strategy should you recommend for each application?

    Azure AD V2.0 endpoint -
Microsoft identity platform is an evolution of the Azure Active Directory (Azure AD) developer platform. It allows developers to build applications that sign in all Microsoft identities and get tokens to call Microsoft APIs, such as Microsoft Graph, or APIs that developers have built. The Microsoft identity platform consists of an OAuth 2.0 and OpenID Connect standard-compliant authentication service that enables developers to authenticate any Microsoft identity, including: work or school accounts (provisioned through Azure AD), personal Microsoft accounts (such as Skype, Xbox, and Outlook.com), and social or local accounts (via Azure AD B2C).

    You must recommend a solution that lets employees sign in to all company resources by using a single account. The solution must implement an identity provider.

    You need to provide guidance on the different identity providers.
    How should you describe each identity provider?

Azure AD Domain Services for hybrid organizations: Organizations with a hybrid IT infrastructure consume a mix of cloud resources and on-premises resources. Such organizations synchronize identity information from their on-premises directory to their Azure AD tenant. As hybrid organizations look to migrate more of their on-premises applications to the cloud, especially legacy directory-aware applications, Azure AD Domain Services can be useful to them.

    2.) B. User management occurs on-premises. The on-premises domain controller authenticates employee credentials.

    1.) To perform real-time reporting using Microsoft Power BI, you must first:
    A. clear Send to Log Analytics
    B. clear SQLInsights
    C. select Archive to a storage account
D. select Stream to an event hub

    You need to recommend a solution to meet the following requirements for the virtual machines that will run App1:

    ✑ Ensure that the virtual machines can authenticate to Azure Active Directory (Azure AD) to gain access to an Azure key vault, Azure Logic Apps instances, and an Azure SQL database.

    ✑ Avoid assigning new roles and permissions for Azure services when you deploy additional virtual machines.

    ✑ Avoid storing secrets and certificates on the virtual machines.

    ✑ Minimize administrative effort for managing identities.

    Which type of identity should you include in the recommendation?

    Managed identities for Azure resources is a feature of Azure Active Directory.

    User-assigned managed identity can be shared. The same user-assigned managed identity can be associated with more than one Azure resource.

    You need to design a solution to expose the microservices to the consumer apps. The solution must meet the following requirements:

    ✑ Ingress access to the microservices must be restricted to a single private IP address and protected by using mutual TLS authentication.
    ✑ The number of incoming microservice calls must be rate-limited.
    ✑ Costs must be minimized.

    What should you include in the solution?

    The API must meet the following requirements:

    ✑ Implement Azure Functions.
    ✑ Provide public read-only operations.
    ✑ Do not allow write operations.
    You need to recommend configuration options.

    What should you recommend?

    1.) Allowed authentication methods
    -all methods
    -GET only
    -GET and POST only
    -GET, POST, and OPTIONS only

    The option is Allow Anonymous requests.
    This option turns on authentication and authorization in App Service, but defers authorization decisions to your application code.

    For authenticated requests, App Service also passes along authentication information in the HTTP headers.

    Contoso is preparing to migrate all workloads to Azure. Contoso wants users to use single sign-on (SSO) when they access cloud-based services that integrate with Azure Active Directory (Azure AD).
    You need to identify any objects in Active Directory that will fail to synchronize to Azure AD due to formatting issues. The solution must minimize costs.

    What should you include in the solution?

    You need to ensure that the application is protected from SQL injection attempts and uses a layer-7 load balancer. The solution must minimize disruption to the code for the existing web application.

    What should you recommend for each?

    Azure Application Gateway provides an application delivery controller (ADC) as a service. It offers various layer 7 load-balancing capabilities for your applications.

    2.)Web Application Firewall (WAF)

    Application Gateway web application firewall (WAF) protects web applications from common vulnerabilities and exploits.

    This is done through rules that are defined based on the OWASP core rule sets 3.0 or 2.2.9.

    Ten users in the finance department of your company plan to access the blobs during the month of April.

    You need to recommend a solution to enable access to the blobs during the month of April only.

    App1 will be accessed from the internet by the users at your company. All the users have computers that run Windows 10 and are joined to Azure AD.

    You need to recommend a solution to ensure that the users can connect to App1 without being prompted for authentication and can access App1 only from company-owned computers.

    What should you recommend for each requirement?

    1.) The Users can connect to APP1 without being prompted for authentication:

    A. An Azure AD app registration
    B. An AD managed identity
    C. Azure AD Application Proxy

    2.) The Users can access APP1 only from company-owned computers:

Azure Active Directory (Azure AD) provides cloud-based directory and identity management services. You can use Azure AD to manage users of your application and authenticate access to your applications using Azure Active Directory.
You register your application with the Azure Active Directory tenant.

    2.) A. A conditional access policy

    ✑ Use Azure Blueprints to control governance across all the subscriptions and resource groups.
    ✑ Ensure that Blueprints-based configurations are consistent across all the subscriptions and resource groups.
    ✑ Minimize the number of blueprint definitions and assignments.

    What should you include in the solution?

    1.) Level at which to define the blueprints:

    2.) Level at which to create the blueprint assignments:

    When creating a blueprint definition, you'll define where the blueprint is saved. Blueprints can be saved to a management group or subscription that you have
    Contributor access to. If the location is a management group, the blueprint is available to assign to any child subscription of that management group.

    2.) B. The root management group

    You need to recommend a solution to provide developers with the ability to provision Azure virtual machines. The solution must meet the following requirements:

    ✑ Only allow the creation of the virtual machines in specific regions.
    ✑ Only allow the creation of specific sizes of virtual machines.
    What should you include in the recommendation?

    The network contains an Active Directory domain named contoso.com that is synced to Azure Active Directory (Azure AD).
    All users connect to an Exchange Online.
    You need to recommend a solution to ensure that all the users use Azure Multi-Factor Authentication (MFA) to connect to Exchange Online from one of the offices.

    What should you include in the recommendation?

    Security: review membership of admin roles and require users to provide a justification for continued membership, Get alerts about changes to administrator assignments, See a history of admin activity including which changes admins made to azure resources

    Development: Enable the applications to access the Azure Key Vault and retrieve keys for use in code.

    Quality Assurance: Receive temporary admin access to create and configure additional web and API applications in the test environment.

    You need to recommend the appropriate Azure service for each department request.

    What should you recommend for each department?

    1.) Security
    2.) Development
    3.) Quality Assurance

    You plan to integrate Active Directory and Azure Active Directory (Azure AD) by using Azure AD Connect.
    You need to recommend a solution to ensure that group owners are emailed monthly about the group memberships they manage.

    What should you include in the recommendation?

    You need to recommend a solution to ensure that the applications can authenticate by using the same Azure Active Directory (Azure AD) identity.
    The solution must meet the following requirements:

    ✑ Ensure that the applications can authenticate only when running on the 10 virtual machines.
    ✑ Minimize administrative effort.

    What should you include in the recommendation?

    1.) To provision the Azure AD identity:
    A. Create a system-assigned Managed Identities for Azure resource
    B. Create a user-assigned Managed Identities for Azure resource
    C. Register each application in Azure AD

The managed identities for Azure resources feature in Azure Active Directory (Azure AD) provides Azure services with an automatically managed identity in Azure AD. You can use the identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without any credentials in your code.

    A system-assigned managed identity is enabled directly on an Azure service instance. When the identity is enabled, Azure creates an identity for the instance in the Azure AD tenant that's trusted by the subscription of the instance. After the identity is created, the credentials are provisioned onto the instance.

    2.)C. An Azure Instance Metadata Service identity OAuth2 endpoint

    You create two Azure virtual machines named VM1 and VM2.
    You need to ensure that Admin1 and Admin2 are notified when more than five events are added to the security log of VM1 or VM2 during a period of 120 seconds.

    The solution must minimize administrative tasks.

    You discover several login attempts to the Azure portal from countries where administrative users do NOT work.

    You need to ensure that all login attempts to the Azure portal from those countries require Azure Multi-Factor Authentication (MFA).

    Solution: Create an Access Review for Group1.

    You discover several login attempts to the Azure portal from countries where administrative users do NOT work.

    You need to ensure that all login attempts to the Azure portal from those countries require Azure Multi-Factor Authentication (MFA).

    Solution: Implement Azure AD Identity Protection for Group1.

    You discover several login attempts to the Azure portal from countries where administrative users do NOT work.

    You need to ensure that all login attempts to the Azure portal from those countries require Azure Multi-Factor Authentication (MFA).

    Solution: You implement an access package.

    The instances host databases that have the following characteristics:

    ✑ The largest database is currently 3 TB. None of the databases will ever exceed 4 TB.

    ✑ Stored procedures are implemented by using CLR.
    You plan to move all the data from SQL Server to Azure.
    You need to recommend an Azure service to host the databases. The solution must meet the following requirements:

    ✑ Whenever possible, minimize management overhead for the migrated databases.

    ✑ Minimize the number of database changes required to facilitate the migration.

    ✑ Ensure that users can authenticate by using their Active Directory credentials.

    What should you include in the recommendation?

    App1 -- web app -- processes customer orders
    Function1 -- function -- checks product availability at vendor 1
    Function2 -- function -- checks product availability at vendor 2
    storage1 -- storage account -- stores order processing logs

    The order processing system will have the following transaction flow:

    ✑ A customer will place an order by using App1.
    ✑ When the order is received, App1 will generate a message to check for product availability at vendor 1 and vendor 2.
    ✑ An integration component will process the message, and then trigger either Function1 or Function2 depending on the type of order.
    ✑ Once a vendor confirms the product availability, a status message for App1 will be generated by Function1 or Function2.
    ✑ All the steps of the transaction will be logged to storage1.

    Which type of resource should you recommend for the integration component?

    A data factory can have one or more pipelines. A pipeline is a logical grouping of activities that together perform a task.

    The activities in a pipeline define actions to perform on your data.

    Data Factory has three groupings of activities: data movement activities, data transformation activities, and control activities.
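
    As an illustration of those groupings, a minimal pipeline definition with a single data movement (Copy) activity might look like the following JSON body, shown here as a Python dict; all activity and dataset names are hypothetical:

        # Hypothetical Data Factory pipeline definition: one Copy (data
        # movement) activity reading from a source dataset and writing to a
        # sink dataset. Names are placeholders, not from the scenario.
        copy_pipeline = {
            "properties": {
                "activities": [
                    {
                        "name": "CopyOrderLogs",
                        "type": "Copy",
                        "inputs": [{"referenceName": "SourceDataset", "type": "DatasetReference"}],
                        "outputs": [{"referenceName": "SinkDataset", "type": "DatasetReference"}],
                        "typeProperties": {
                            "source": {"type": "BlobSource"},
                            "sink": {"type": "BlobSink"},
                        },
                    }
                ]
            }
        }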

    The on-premises network does not have hybrid connectivity to Azure by using Site-to-Site VPN or ExpressRoute.

    You want to migrate the packages to Azure Data Factory.

    You need to recommend a solution that facilitates the migration while minimizing changes to the existing packages. The solution must minimize costs.

    What should you recommend?

    1.) Store SSISDB catalog by using:
    A. Azure SQL database
    B. Azure Synapse Analytics
    C. SQL Server on an Azure virtual machine
    D. SQL Server on an on-premises computer

    You can't create the SSISDB catalog database on Azure SQL Database at this time independently of creating the Azure-SSIS Integration Runtime in Azure Data Factory. The Azure-SSIS IR is the runtime environment that runs SSIS packages on Azure.

    2.) C. The Azure-SSIS Integration Runtime and a self-hosted integration runtime.

    What Azure service should you recommend?

    Microsoft has engineered a powerful solution, called Data Box, that helps customers get their data to the Azure public cloud in a cost-effective, secure, and efficient manner. Data Box is in general availability status. It is a rugged device that provides 100 TB of capacity onto which organizations copy their data before shipping the device to Microsoft for transfer to Azure.

    You need to recommend a solution to encrypt the disks by using Azure Disk Encryption. The solution must provide the ability to encrypt operating system disks and data disks.

    What should you include in the recommendation?

    You need to transform the data by using mapping data flow.

    Which Azure service should you use?

    What should you deploy on VM1 to support the design?

    The integration runtime (IR) is the compute infrastructure that Azure Data Factory uses to provide data-integration capabilities across different network environments. For details about IR, see Integration runtime overview.

    You need to design a storage solution for the application.

    The solution must meet the following requirements:
    ✑ Operational costs must be minimized.
    ✑ All customers must have their own database.
    ✑ The customer databases will be in one of the following three Azure regions: East US, North Europe, or South Africa North.

    What is the minimum number of elastic pools and Azure SQL Database servers required?

    2.) Azure SQL Database Servers

    The log files are generated by user activity to Apache web servers. The log files are in a consistent format. Approximately 1 GB of logs are generated per day.

    Microsoft Power BI is used to display weekly reports of the user activity.

    You need to recommend a solution to minimize costs while maintaining the functionality of the architecture.

    What should you recommend?

    Migration of the SQL Server instances to Azure must:
    ✑ Support automatic patching and version updates to SQL Server.
    ✑ Provide automatic backup services.
    ✑ Allow for high-availability of the instances.
    ✑ Provide a native VNET with private IP addressing.
    ✑ Encrypt all data in transit.
    ✑ Be in a single-tenant environment with dedicated underlying infrastructure (compute, storage).

    You need to migrate the SQL Server instances to Azure.

    Which Azure service should you use?

    Data in the cool access tier can tolerate slightly lower availability, but still requires high durability, retrieval latency, and throughput characteristics similar to hot data. For cool data, a slightly lower availability service-level agreement (SLA) and higher access costs compared to hot data are acceptable trade-offs for lower storage costs.
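
    As a sketch, moving an existing blob to the cool tier with the azure-storage-blob SDK could look like this; the connection string, container, and blob names are placeholders:

        # Sketch: set a blob to the Cool access tier (azure-storage-blob v12).
        from azure.storage.blob import BlobServiceClient

        service = BlobServiceClient.from_connection_string("<connection-string>")
        blob = service.get_blob_client(container="logs", blob="2020-01.log")
        blob.set_standard_blob_tier("Cool")  # "Hot", "Cool", or "Archive"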

    You need to recommend a caching policy for each disk. The policy must provide the best overall performance for the virtual machine while preserving integrity of the SQL data and logs.

    Which caching policy should you recommend for each disk?

    Each policy may be used once, more than once, or not at all.

    You need to recommend a database platform to host the databases.
    The solution must meet the following requirements:

    ✑ The compute resources allocated to the databases must scale dynamically.
    ✑ The solution must meet an SLA of 99.99% uptime.
    ✑ The solution must have reserved capacity.
    ✑ Compute charges must be minimized.

    What should you include in the recommendation?

    You need to recommend an Azure solution to host DB1 and DB2. The solution must meet the following requirements:

    ✑ Support server-side transactions across DB1 and DB2.
    ✑ Minimize administrative effort to update the solution.

    What should you recommend?

    The number of fault domains is set to 3. The number of update domains is set to 20.

    You need to identify how many App1 instances will remain available during a period of planned maintenance.

    How many App1 instances should you identify?

    You need to ensure that the archived data cannot be deleted for five years.

    The solution must prevent administrators from deleting the data.

    Solution: You create an Azure Blob storage container, and you configure a legal hold access policy.

    Use an Azure Blob storage container, but use a time-based retention policy instead of a legal hold.

    Note: Immutable storage for Azure Blob storage enables users to store business-critical data objects in a WORM (Write Once, Read Many) state. This state makes the data non-erasable and non-modifiable for a user-specified interval. For the duration of the retention interval, blobs can be created and read, but cannot be modified or deleted. Immutable storage is available for general-purpose v2 and Blob storage accounts in all Azure regions.
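
    A rough sketch of creating the five-year (1825-day) time-based retention policy through the Storage management REST API follows; every identifier, the bearer token, and the api-version are placeholders to verify against current documentation:

        # Sketch: create a time-based retention (immutability) policy on a
        # blob container via the ARM REST API. All values are placeholders.
        import requests

        url = (
            "https://management.azure.com/subscriptions/<sub-id>"
            "/resourceGroups/<rg>/providers/Microsoft.Storage"
            "/storageAccounts/<account>/blobServices/default"
            "/containers/<container>/immutabilityPolicies/default"
        )
        body = {"properties": {"immutabilityPeriodSinceCreationInDays": 1825}}

        resp = requests.put(
            url,
            params={"api-version": "2021-04-01"},  # assumed; verify before use
            headers={"Authorization": "Bearer <arm-token>"},
            json=body,
        )
        resp.raise_for_status()

    Note that the policy must also be locked after testing; an unlocked policy can still be removed, so only a locked policy prevents administrators from deleting the data.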

    You need to ensure that the archived data cannot be deleted for five years.

    The solution must prevent administrators from deleting the data.

    Solution: You create a file share and snapshots.

    You need to ensure that the archived data cannot be deleted for five years.

    The solution must prevent administrators from deleting the data.

    Solution: You create a file share, and you configure an access policy.

    Instead of a file share, immutable Blob storage is required.

    You plan to migrate the virtual machines to an Azure subscription.
    You need to recommend a solution to replicate the disks of the virtual machines to Azure. The solution must ensure that the virtual machines remain available during the migration of the disks.

    Solution: You recommend implementing an Azure Storage account, and then running AzCopy.

    You plan to migrate the virtual machines to an Azure subscription.
    You need to recommend a solution to replicate the disks of the virtual machines to Azure. The solution must ensure that the virtual machines remain available during the migration of the disks.

    Solution: You recommend implementing an Azure Storage account that has a file service and a blob service, and then using the Data Migration Assistant.

    You plan to migrate the virtual machines to an Azure subscription.
    You need to recommend a solution to replicate the disks of the virtual machines to Azure. The solution must ensure that the virtual machines remain available during the migration of the disks.

    Solution: You recommend implementing a Recovery Services vault, and then using Azure Site Recovery.

    Site Recovery can replicate on-premises VMware VMs, Hyper-V VMs, physical servers (Windows and Linux), and Azure Stack VMs to Azure.

    You identify the following types of infrequently accessed data:
    ✑ Telemetry data: Deleted after two years
    ✑ Promotional material: Deleted after 14 days
    ✑ Virtual machine audit data: Deleted after 200 days

    A colleague recommends using the archive access tier to store the data.

    Which statement accurately describes the recommendation?

    Which storage solution should you recommend?

    Enable geo-replication for container images.
    Best practice: Store your container images in Azure Container Registry and geo-replicate the registry to each AKS region.

    To deploy and run your applications in AKS, you need a way to store and pull the container images. Container Registry integrates with AKS, so it can securely store your container images or Helm charts. Container Registry supports multimaster geo-replication to automatically replicate your images to Azure regions around the world.
    Geo-replication is a feature of Premium SKU container registries.

    Note:
    When you use Container Registry geo-replication to pull images from the same region, the results are:
    Faster: You pull images from high-speed, low-latency network connections within the same Azure region.
    More reliable: If a region is unavailable, your AKS cluster pulls the images from an available container registry.
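
    As a sketch, adding a geo-replica to a Premium registry is a single ARM resource creation; the identifiers and api-version below are placeholders to verify:

        # Sketch: add a West Europe replica to a Premium container registry
        # via the ARM REST API. All identifiers are placeholders.
        import requests

        url = (
            "https://management.azure.com/subscriptions/<sub-id>"
            "/resourceGroups/<rg>/providers/Microsoft.ContainerRegistry"
            "/registries/<registry>/replications/westeurope"
        )
        resp = requests.put(
            url,
            params={"api-version": "2019-05-01"},  # assumed; verify before use
            headers={"Authorization": "Bearer <arm-token>"},
            json={"location": "westeurope"},
        )
        resp.raise_for_status()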

    What should you include in the recommendation?

    Incorrect Answer:
    Use Azure File Sync to centralize your organization's file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an on-premises file server. Azure File Sync transforms Windows Server into a quick cache of your Azure file share.
    You need an Azure file share in the same region in which you want to deploy Azure File Sync.

    The app must meet the following requirements:
    ✑ Website latency must be consistent for users in different geographical regions.
    ✑ Users must be able to authenticate by using Twitter and Facebook.
    ✑ Code must include only HTML, native JavaScript, and jQuery.
    ✑ Costs must be minimized.

    Which Azure service should you use to complete the architecture?

    With App Service you can authenticate your customers with Azure Active Directory, and integrate with Facebook, Twitter, Google.

    Which deployment option should you use?

    Standard geo-replication is available with Standard and Premium databases in the current Azure Management Portal and standard APIs.

    Incorrect:
    Not B: Business Critical service tier is designed for applications that require low-latency responses from the underlying SSD storage (1-2 ms in average), fast recovery if the underlying infrastructure fails, or need to off-load reports, analytics, and read-only queries to the free of charge readable secondary replica of the primary database.

    Note: Azure SQL Database and Azure SQL Managed Instance are both based on SQL Server database engine architecture that is adjusted for the cloud environment in order to ensure 99.99% availability even in the cases of infrastructure failures.

    You need to recommend which Azure services meet the business continuity and disaster recovery objectives. The solution must minimize costs.

    What should you recommend for each application?

    1.) Sales
    2.) Finance
    3.) Reporting

    Time-based retention policy support: Users can set policies to store data for a specified interval. When a time-based retention policy is set, blobs can be created and read, but not modified or deleted. After the retention period has expired, blobs can be deleted but not overwritten.

    You need to recommend a solution for delivering the files to the users. The solution must meet the following requirements:
    ✑ Ensure that the users receive files from the same region as the web app that they access.
    ✑ Ensure that the files only need to be uploaded once.
    ✑ Minimize costs.

    What should you include in the recommendation?

    What should you recommend?

    Solution: You deploy a virtual machine scale set that uses autoscaling.

    Solution: You deploy two Azure virtual machines to two Azure regions, and you deploy an Azure Application Gateway.

    Solution: You deploy two Azure virtual machines to two Azure regions, and create a Traffic Manager profile.

    You need to recommend a solution that meets the following requirements:
    ✑ Minimizes the use of the virtual machine processors to transfer data
    ✑ Minimizes network latency

    Which virtual machine size and feature should you use?

    1.) Virtual machine size:
    - Compute optimized: Standard_F8s
    - General purpose: Standard_B8ms
    - High performance compute: Standard_H16r
    - Memory optimized: Standard_E16s_v3

    The solution must meet the following requirements:
    ✑ The front-end tier must be accessible by using a public IP address on port 80.
    ✑ The backend tier must be accessible by using port 8080 from the front-end tier only.
    ✑ Both containers must be able to access the same Azure file share.
    ✑ If a container fails, the application must restart automatically.
    ✑ Costs must be minimized.

    What should you recommend using to host the application?

    Which two actions should you recommend?

    You need to recommend a solution to remove AspNet-Version from the response of the published APIs.

    What should you include in the recommendation?

    You have a PowerShell script that identifies and deletes duplicate files in the storage account. Currently, the script is run manually after approval from the operations manager.

    You need to recommend a serverless solution that performs the following actions:
    ✑ Runs the script once an hour to identify whether duplicate files exist
    ✑ Sends an email notification to the operations manager requesting approval to delete the duplicate files
    ✑ Processes an email response from the operations manager specifying whether the deletion was approved
    ✑ Runs the script if the deletion was approved

    What should you include in the recommendation?

    You identify the following technical requirements:
    ✑ All Azure virtual machines must be placed on the same subnet named Subnet1.
    ✑ All the Azure virtual machines must be able to communicate with all on-premises servers.
    ✑ The servers must be able to communicate between the on-premises network and Azure by using a site-to-site VPN.

    You need to recommend a subnet design that meets the technical requirements.

    What should you include in the recommendation?

    What should you recommend?

    The solution must meet the following requirements:
    ✑ Requests to the logic apps from the developers must be limited to lower rates than the requests from the users at Contoso.
    ✑ The developers must be able to rely on their existing OAuth 2.0 provider to gain access to the logic apps.
    ✑ The solution must NOT require changes to the logic apps.
    ✑ The solution must NOT use Azure AD guest accounts.

    What should you include in the solution?

    API Management helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services.
    You can secure API Management using the OAuth 2.0 client credentials flow.
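
    A minimal sketch of that flow from the caller's side; the tenant, client credentials, and scope are placeholders:

        # Sketch: OAuth 2.0 client credentials flow against Azure AD. The
        # resulting token is presented to the API Management gateway.
        import requests

        token_url = "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token"

        resp = requests.post(
            token_url,
            data={
                "grant_type": "client_credentials",
                "client_id": "<client-id>",
                "client_secret": "<client-secret>",
                "scope": "api://<backend-api-id>/.default",
            },
        )
        resp.raise_for_status()
        token = resp.json()["access_token"]
        # e.g. requests.get("<api-url>", headers={"Authorization": f"Bearer {token}"})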

    What should you include in the solution?

    You need to recommend a platform to host the app. The solution must meet the following requirements:
    ✑ Support autoscaling.
    ✑ Support continuous deployment from an Azure Container Registry.
    ✑ Provide built-in functionality to authenticate app users by using Azure Active Directory (Azure AD).

    Which platform should you include in the recommendation?

    Which two options should you recommend?

    B: Forced tunneling lets you redirect or "force" all Internet-bound traffic back to your on-premises location via a Site-to-Site VPN tunnel for inspection and auditing.

    This is a critical security requirement for most enterprise IT policies. Without forced tunneling, Internet-bound traffic from your VMs in Azure always traverses from Azure network infrastructure directly out to the Internet, without the option to allow you to inspect or audit the traffic.
    Forced tunneling in Azure is configured via virtual network user-defined routes.
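
    As a sketch, the user-defined route that implements forced tunneling is a default route whose next hop is the virtual network gateway; the ARM-style body below is illustrative:

        # Sketch of a route table (ARM-style body) that forces all
        # Internet-bound traffic back through the Site-to-Site VPN gateway.
        route_table = {
            "location": "<region>",
            "properties": {
                "routes": [
                    {
                        "name": "force-tunnel-default",
                        "properties": {
                            "addressPrefix": "0.0.0.0/0",            # all traffic
                            "nextHopType": "VirtualNetworkGateway",  # via S2S VPN
                        },
                    }
                ]
            },
        }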

    C: ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private connection facilitated by a connectivity provider. With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure, Office 365, and Dynamics 365.

    What should you include in the recommendation?

    Service Bus is a transactional message broker and ensures transactional integrity for all internal operations against its message stores. All transfers of messages inside of Service Bus, such as moving messages to a dead-letter queue or automatic forwarding of messages between entities, are transactional.
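
    For illustration, sending to and settling from a queue with the azure-servicebus SDK (v7) might look like this; the connection string and queue name are placeholders:

        # Sketch: send and receive on a Service Bus queue (azure-servicebus v7).
        from azure.servicebus import ServiceBusClient, ServiceBusMessage

        with ServiceBusClient.from_connection_string("<connection-string>") as client:
            with client.get_queue_sender(queue_name="orders") as sender:
                sender.send_messages(ServiceBusMessage("order-123"))
            with client.get_queue_receiver(queue_name="orders") as receiver:
                for msg in receiver.receive_messages(max_wait_time=5):
                    receiver.complete_message(msg)  # settle on success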

    What should developers use to interact with the queues?

    What should you include in the recommendation?

    You need to recommend a technology.

    Location -- Resources
    Azure -- Azure subscription, 20 Azure web apps
    On-premises datacenter -- Active Directory domain, server running Azure AD Connect, Linux computer

    The on-premises Active Directory domain syncs to Azure Active Directory (Azure AD).

    Server1 runs an application named App1 that uses LDAP queries to verify user identities in the on-premises Active Directory domain.
    You plan to migrate Server1 to a virtual machine in Subscription1.
    A company security policy states that the virtual machines and services deployed to Subscription1 must be prevented from accessing the on-premises network.

    You need to recommend a solution to ensure that App1 continues to function after the migration. The solution must meet the security policy.

    What should you include in the recommendation?

    1.) Store content close to end users:

    2.) Store content close to the application:

    A content delivery network (CDN) is a distributed network of servers that can efficiently deliver web content to users. CDNs store cached content on edge servers in point-of-presence (POP) locations that are close to end users, to minimize latency.
    Azure Content Delivery Network (CDN) offers developers a global solution for rapidly delivering high-bandwidth content to users by caching their content at strategically placed physical nodes across the world. Azure CDN can also accelerate dynamic content, which cannot be cached, by leveraging various network optimizations using CDN POPs. For example, route optimization to bypass Border Gateway Protocol (BGP).

    What should you include in the recommendation?

    You identify the storage priorities for various data types:

    Operating system --> Speed and availability
    Database and logs --> Speed and availability
    Backups --> Lowest cost

    Which storage type should you recommend for each data type?

    1.) Operating system
    2.) Database and logs
    3.) Backups

    You need to recommend a solution to meet the regulatory requirement.

    Solution: You recommend using the Regulatory compliance dashboard in Azure Security Center.

    The Regulatory compliance dashboard in Azure Security Center is not used for regional compliance.

    Note: Instead, Azure Resource Policy definitions can be used, which can be applied to a specific resource group containing the App Service instances.
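
    For illustration, an "allowed locations" policy rule of the kind the note refers to has roughly this shape (policy JSON shown as a Python dict; the permitted regions are examples):

        # Sketch of an "allowed locations" Azure Policy rule: deny any resource
        # whose location is outside the listed regions (regions illustrative).
        policy_rule = {
            "if": {
                "not": {
                    "field": "location",
                    "in": ["eastus", "westeurope"],
                }
            },
            "then": {"effect": "deny"},
        }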

    Active Directory Environment
    The network contains two Active Directory forests named corp.fabrikam.com and rd.fabrikam.com.

    There are no trust relationships between the forests.
    Corp.fabrikam.com is a production forest that contains identities used for internal user and computer authentication.
    Rd.fabrikam.com is used by the research and development (R&D) department only.

    Network Infrastructure
    Each office contains at least one domain controller from the corp.fabrikam.com domain. The main office contains all the domain controllers for the rd.fabrikam.com forest.
    All the offices have a high-speed connection to the Internet.
    An existing application named WebApp1 is hosted in the data center of the London office. WebApp1 is used by customers to place and track orders. WebApp1 has a web tier that uses Microsoft Internet Information Services (IIS) and a database tier that runs Microsoft SQL Server 2016. The web tier and the database tier are deployed to virtual machines that run on Hyper-V.
    The IT department currently uses a separate Hyper-V environment to test updates to WebApp1.
    Fabrikam purchases all Microsoft licenses through a Microsoft Enterprise Agreement that includes Software Assurance.

    Problem Statements
    The use of WebApp1 is unpredictable. At peak times, users often report delays. At other times, many resources for WebApp1 are underutilized.

    Planned Changes
    Fabrikam plans to move most of its production workloads to Azure during the next few years.
    As one of its first projects, the company plans to establish a hybrid identity model, facilitating an upcoming Microsoft Office 365 deployment.
    All R&D operations will remain on-premises.
    Fabrikam plans to migrate the production and test instances of WebApp1 to Azure and to use the S1 plan.

    Technical Requirements
    Fabrikam identifies the following technical requirements:
    -Web site content must be easily updated from a single point.
    -User input must be minimized when provisioning new web app instances.
    -Whenever possible, existing on-premises licenses must be used to reduce cost.
    -Users must always authenticate by using their corp.fabrikam.com UPN identity.
    -Any new deployments to Azure must be redundant in case an Azure region fails.
    -Whenever possible, solutions must be deployed to Azure by using the Standard pricing tier of Azure App Service.
    -An email distribution group named IT Support must be notified of any issues relating to the directory synchronization services.
    -Directory synchronization between Azure Active Directory (Azure AD) and corp.fabrikam.com must not be affected by a link failure between Azure and the on-premises network.

    Database Requirements
    Fabrikam identifies the following database requirements:
    Database metrics for the production instance of WebApp1 must be available for analysis so that database administrators can optimize the performance settings.
    To avoid disrupting customer access, database downtime must be minimized when databases are migrated.
    Database backups must be retained for a minimum of seven years to meet compliance requirements.

    Security Requirements
    Fabrikam identifies the following security requirements:
    Company information including policies, templates, and data must be inaccessible to anyone outside the company.
    Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an Internet link fails.
    Administrators must be able to authenticate to the Azure portal by using their corp.fabrikam.com credentials.
    All administrative access to the Azure portal must be secured by using multi-factor authentication.
    The testing of WebApp1 updates must not be visible to anyone outside the company.

    Question
    What should you include in the identity management strategy to support the planned changes?

    To meet the authentication requirements of Fabrikam, what should you include in the solution?

    -Minimum number of Azure AD tenants: 1
    -Minimum number of custom domains to add:

    The network contains two Active Directory forests named corp.fabrikam.com and rd.fabrikam.com. There are no trust relationships between the forests.

    -Minimum number of custom domains to add: 1

    -Minimum number of conditional access policies to create: 1

    Scenario:
    ✑ Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an Internet link fails.
    ✑ Administrators must be able to authenticate to the Azure portal by using their corp.fabrikam.com credentials.
    ✑ All administrative access to the Azure portal must be secured by using multi-factor authentication.

    Question
    You need to recommend a notification solution for the IT Support distribution group.

    What should you include in the recommendation?

    You need to recommend a strategy for migrating the database content of WebApp1 to Azure.

    What should you include in the recommendation?

    Before you upload a Windows virtual machine (VM) from on-premises to Azure, you must prepare the virtual hard disk (VHD or VHDX).

    You need to recommend a solution to meet the database retention requirement.

    What should you recommend?

    Contoso, Ltd, is a US-based financial services company that has a main office in New York and a branch office in San Francisco.

    Existing Environment
    Payment Processing System

    Contoso hosts a business-critical payment processing system in its New York data center. The system has three tiers: a front-end web app, a middle-tier web API, and a back-end data store implemented as a Microsoft SQL Server 2014 database. All servers run Windows Server 2012 R2.

    The front-end and middle-tier components are hosted by using Microsoft Internet Information Services (IIS). The application code is written in C# and ASP.NET.
    The middle-tier API uses the Entity Framework to communicate with the SQL Server database. Maintenance of the database is performed by using SQL Server Agent jobs.
    The database is currently 2 TB and is not expected to grow beyond 3 TB.

    The payment processing system has the following compliance-related requirements:
    -Encrypt data in transit and at rest. Only the front-end and middle-tier components must be able to access the encryption keys that protect the data store.
    -Keep backups of the data in two separate physical locations that are at least 200 miles apart and can be restored for up to seven years.
    -Support blocking inbound and outbound traffic based on the source IP address, the destination IP address, and the port number.
    -Collect Windows security logs from all the middle-tier servers and retain the logs for a period of seven years.
    -Inspect inbound and outbound traffic from the front-end tier by using highly available network appliances.
    -Allow access to all the tiers only from the internal network of Contoso.

    Tape backups are configured by using an on-premises deployment of Microsoft System Center Data Protection Manager (DPM), and then shipped offsite for long term storage.

    Historical Transaction Query System
    Contoso recently migrated a business-critical workload to Azure. The workload contains a .NET web service for querying the historical transaction data residing in Azure Table Storage. The .NET web service is accessible from a client app that was developed in-house and runs on the client computers in the New York office.
    The data in the table storage is 50 GB and is not expected to increase.

    Current Issues
    The Contoso IT team discovers poor performance of the historical transaction query system, as the queries frequently cause table scans.

    Requirements
    Planned Changes
    Contoso plans to implement the following changes:
    -Migrate the payment processing system to Azure.
    -Migrate the historical transaction data to Azure Cosmos DB to address the performance issues.

    Migration Requirements
    Contoso identifies the following general migration requirements:
    -Infrastructure services must remain available if a region or a data center fails.
    -Failover must occur without any administrative intervention.
    -Whenever possible, Azure managed services must be used to minimize management overhead.
    -Whenever possible, costs must be minimized.

    Contoso identifies the following requirements for the payment processing system:

    -If a data center fails, ensure that the payment processing system remains available without any administrative intervention.
    -The middle-tier and the web front end must continue to operate without any additional configurations.
    -Ensure that the number of compute nodes of the front-end and the middle tiers of the payment processing system can increase or decrease automatically based on CPU utilization.
    -Ensure that each tier of the payment processing system is subject to a Service Level Agreement (SLA) of 99.99 percent availability.
    -Minimize the effort required to modify the middle-tier API and the back-end tier of the payment processing system.
    -Payment processing system must be able to use grouping and joining tables on encrypted columns.
    -Generate alerts when unauthorized login attempts occur on the middle-tier virtual machines.
    -Ensure that the payment processing system preserves its current compliance status.
    -Host the middle tier of the payment processing system on a virtual machine.

    Contoso identifies the following requirements for the historical transaction query system:

    -Minimize the use of on-premises infrastructure services.
    -Minimize the effort required to modify the .NET web service querying Azure Cosmos DB.
    -Minimize the frequency of table scans.
    -If a region fails, ensure that the historical transaction query system remains available without any administrative intervention.

    Information Security Requirements
    The IT security team wants to ensure that identity management is performed by using Active Directory. Password hashes must be stored on-premises only.

    Access to all business-critical systems must rely on Active Directory credentials. Any suspicious authentication attempts must trigger a multi-factor authentication prompt automatically.

    Question
    You need to recommend a solution for protecting the content of the payment processing system.

    What should you include in the recommendation?

    Question
    You need to recommend a solution for the data store of the historical transaction query system.

    What should you include in the recommendation?

    1.) Sizing Requirements:
    -A table that has unlimited capacity
    -A table that has a fixed capacity
    -Multiple tables that have unlimited capacity
    -Multiple tables that have fixed capacity

    You need to recommend a backup solution for the data store of the payment processing system.

    What should you include in the recommendation?

    Contoso, Ltd, is a US-based financial services company that has a main office in New York and a branch office in San Francisco.

    Existing Environment
    Payment Processing System

    Contoso hosts a business-critical payment processing system in its New York data center. The system has three tiers: a front-end web app, a middle-tier web API, and a back-end data store implemented as a Microsoft SQL Server 2014 database. All servers run Windows Server 2012 R2.

    The front-end and middle-tier components are hosted by using Microsoft Internet Information Services (IIS). The application code is written in C# and ASP.NET.
    The middle-tier API uses the Entity Framework to communicate with the SQL Server database. Maintenance of the database is performed by using SQL Server Agent jobs.
    The database is currently 2 TB and is not expected to grow beyond 3 TB.
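
    Aside: for orientation, the middle-tier data access described above might look roughly like the minimal model below. This is illustrative only: the entity, context, and connection details are hypothetical (EF Core shown here; the real system may use Entity Framework 6 on .NET Framework).

        using System;
        using Microsoft.EntityFrameworkCore;

        // Hypothetical entity mapped to a table in the payments database.
        public class Payment
        {
            public int Id { get; set; }
            public decimal Amount { get; set; }
            public DateTime ProcessedAt { get; set; }
        }

        public class PaymentsContext : DbContext
        {
            public DbSet<Payment> Payments => Set<Payment>();

            protected override void OnConfiguring(DbContextOptionsBuilder options) =>
                options.UseSqlServer(
                    "Server=payments-sql;Database=Payments;Trusted_Connection=True;");
        }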

    The payment processing system has the following compliance-related requirements:
    -Encrypt data in transit and at rest. Only the front-end and middle-tier components must be able to access the encryption keys that protect the data store.
    -Keep backups of the data in two separate physical locations that are at least 200 miles apart and can be restored for up to seven years.
    -Support blocking inbound and outbound traffic based on the source IP address, the destination IP address, and the port number.
    -Collect Windows security logs from all the middle-tier servers and retain the logs for a period of seven years.
    -Inspect inbound and outbound traffic from the front-end tier by using highly available network appliances.
    -Allow access to all the tiers only from the internal network of Contoso.

    Tape backups are configured by using an on-premises deployment of Microsoft System Center Data Protection Manager (DPM), and then shipped offsite for long-term storage.

    You need to recommend a compute solution for the middle tier of the payment processing system.

    What should you include in the recommendation?

