This set of Google Professional Cloud Architect certification questions and the exam summary below will help you focus your preparation and stay on track to pass the GCP-PCA exam with a good score.

Google (GCP-PCA) Certification Summary
Google (GCP-PCA) Professional Cloud Architect Certification Exam Syllabus

01. Designing and planning a cloud solution architecture
1. Designing a solution infrastructure that meets business requirements. Considerations include:

02. Managing and provisioning a solution infrastructure
1. Configuring network topologies. Considerations include:

03. Designing for security and compliance
1. Designing for security. Considerations include:

04. Analyzing and optimizing technical and business processes
1. Analyzing and defining technical processes. Considerations include:

05. Managing implementation
1. Advising development/operation team(s) to ensure successful deployment of the solution. Considerations include:
06. Ensuring solution and operations reliability
1. Monitoring/logging/profiling/alerting solution
2. Deployment and release management
3. Assisting with the support of solutions in operation
4. Evaluating quality control measures

Google Professional Cloud Architect (GCP-PCA) Certification Questions

01. You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database backend. You want to store the credentials securely. Where should you store the credentials?
a) In the source code
b) In an environment variable
c) In a key management system
d) In a config file that has restricted access through ACLs
e) In a secret management system

02. Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?
a) Create a scalable environment in GCP for simulating production load.
b) Use the existing infrastructure to test the GCP-based backend at scale.
c) Build stress tests into each component of your application and use resources from the already deployed production backend to simulate load.
d) Create a set of static environments in GCP to test different levels of load—for example, high, medium, and low.

03. Today, TerramEarth maintenance workers receive interactive performance graphs for the last 24 hours (86,400 events) by plugging their maintenance tablets into the vehicle. The support group wants support technicians to view this data remotely to help troubleshoot problems. You want to minimize the latency of graph loads. How should you provide this functionality?
a) Execute queries against data stored in Cloud SQL.
b) Execute queries against data indexed by vehicle_id.timestamp in Cloud Bigtable.
c) Execute queries against data stored on daily partitioned BigQuery tables.
d) Execute queries against BigQuery with data stored in Cloud Storage via BigQuery federation.

04. Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup. Which two steps should they take?
a) Load logs into BigQuery.
b) Load logs into Cloud SQL.
c) Import logs into Stackdriver.
d) Insert logs into Cloud Bigtable.
e) Upload log files into Cloud Storage.

05. Because you do not know every possible future use for the data TerramEarth collects, you have decided to build a system that captures and stores all raw data in case you need it later. How can you most cost-effectively accomplish this goal?
a) Have the vehicles in the field continue to dump data via FTP, and adjust the existing Linux machines to immediately upload it to Cloud Storage with gsutil.
b) Have the vehicles in the field pass the data to Cloud Pub/Sub and dump it into a Cloud Dataproc cluster that stores data in Apache Hadoop Distributed File System (HDFS) on persistent disks.
c) Have the vehicles in the field continue to dump data via FTP, adjust the existing Linux machines, and use a collector to upload them into Cloud Dataproc HDFS for storage.
d) Have the vehicles in the field stream the data directly into BigQuery.

06. To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department. Which two steps should you take?
a) Use persistent disks to store the state. Start and stop the VM as needed.
b) Use the --auto-delete flag on all persistent disks before stopping the VM.
c) Apply a VM CPU utilization label and include it in the BigQuery billing export.
d) Use BigQuery billing export and labels to relate cost to groups.
e) Store all state in local SSD, snapshot the persistent disks, and terminate the VM.
f) Store all state in Cloud Storage, snapshot the persistent disks, and terminate the VM.

07. Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs. What should they do?
a) Configure a new load balancer for the new version of the API.
b) Reconfigure old clients to use a new endpoint for the new API.
c) Have the old API forward traffic to the new API based on the path.
d) Use separate backend services for each API path behind the load balancer.

08. You analyzed TerramEarth's business requirement to reduce downtime and found that they can achieve a majority of time saving by reducing customers' wait time for parts. You decided to focus on reduction of the 3 weeks' aggregate reporting time. Which modifications to the company's processes should you recommend?
a) Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.
b) Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.
c) Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.
d) Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.

09. The database administration team has asked you to help them improve the performance of their new database server running on Compute Engine. The database is used for importing and normalizing the company's performance statistics. It is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD zonal persistent disk. What should they change to get better performance from this system in a cost-effective manner?
a) Increase the virtual machine's memory to 64 GB.
b) Create a new virtual machine running PostgreSQL.
c) Dynamically resize the SSD persistent disk to 500 GB.
d) Migrate their performance metrics warehouse to BigQuery.

10. Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier, and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network?
a) Add each tier to a different subnetwork.
b) Set up software-based firewalls on individual VMs.
c) Add tags to each tier and set up routes to allow the desired traffic flow.
d) Add tags to each tier and set up firewall rules to allow the desired traffic flow.

Answers:
Question 01: Answer e
Question 02: Answer a
Question 03: Answer b
Question 04: Answer a, e
Question 05: Answer a
Question 06: Answer a, d
Question 07: Answer d
Question 08: Answer c
Question 09: Answer c
Question 10: Answer d

How to Register for the Professional Cloud Architect Certification Exam?
● Visit the registration site for the Professional Cloud Architect Certification Exam.
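Question 03's answer hinges on Bigtable row-key design: keying rows by vehicle ID plus a reversed timestamp keeps one vehicle's events contiguous and newest-first, so a single row-range scan serves the graph with low latency. A minimal sketch of that key scheme in plain Python (no Bigtable client; the IDs, timestamps, and key layout are illustrative, not the exam's prescribed format):

```python
# Sketch of a Bigtable-style row-key scheme for question 03's answer (b):
# key by vehicle_id plus a reversed timestamp so a lexicographic prefix
# scan returns one vehicle's newest events first. Pure Python; vehicle
# IDs and timestamps are made up for illustration.

MAX_TS = 10**10  # epoch seconds comfortably beyond any real reading


def row_key(vehicle_id: str, ts: int) -> str:
    # Zero-pad the reversed timestamp so string order matches numeric order.
    return f"{vehicle_id}#{MAX_TS - ts:011d}"


def latest_events(keys, vehicle_id, n):
    # Emulates a Bigtable row-range read on the "vehicle_id#" prefix,
    # which touches only that vehicle's contiguous rows.
    prefix = vehicle_id + "#"
    return sorted(k for k in keys if k.startswith(prefix))[:n]


if __name__ == "__main__":
    keys = [row_key("v42", ts) for ts in (100, 300, 200)] + [row_key("v7", 500)]
    print(latest_events(keys, "v42", 2))  # two most recent v42 readings
```

Because larger timestamps map to smaller reversed values, the most recent reading sorts first without any post-query sorting.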
This set of Google Professional Machine Learning Engineer certification questions and the exam summary below will help you focus your preparation and stay on track to pass the GCP-PMLE exam with a good score.

Google (GCP-PMLE) Certification Summary
Google (GCP-PMLE) Professional Machine Learning Engineer Certification Exam Syllabus

01. Framing ML problems
Translating business challenges into ML use cases. Considerations include:
- Choosing the best solution (ML vs. non-ML, custom vs. pre-packaged [e.g., AutoML, Vision API]) based on the business requirements
- Defining how the model output should be used to solve the business problem
- Deciding how incorrect results should be handled
- Identifying data sources (available vs. ideal)
Defining ML problems. Considerations include:
- Problem type (e.g., classification, regression, clustering)
- Outcome of model predictions
- Input (features) and predicted output format
Defining business success criteria. Considerations include:
- Alignment of ML success metrics to the business problem
- Key results
- Determining when a model is deemed unsuccessful
Identifying risks to feasibility of ML solutions. Considerations include:
- Assessing and communicating business impact
- Assessing ML solution readiness
- Assessing data readiness and potential limitations
- Aligning with Google's Responsible AI practices (e.g., different biases)

02. Architecting ML solutions
Designing reliable, scalable, and highly available ML solutions. Considerations include:
- Choosing appropriate ML services for the use case (e.g., Cloud Build, Kubeflow)
- Component types (e.g., data collection, data management)
- Exploration/analysis
- Feature engineering
- Logging/management
- Automation
- Orchestration
- Monitoring
- Serving
Choosing appropriate Google Cloud hardware components. Considerations include:
- Evaluation of compute and accelerator options (e.g., CPU, GPU, TPU, edge devices)
Designing architecture that complies with security concerns across sectors/industries. Considerations include:
- Building secure ML systems (e.g., protecting against unintentional exploitation of data/model, hacking)
- Privacy implications of data usage and/or collection (e.g., handling sensitive data such as Personally Identifiable Information [PII] and Protected Health Information [PHI])

03. Designing data preparation and processing systems
Exploring data (EDA). Considerations include:
- Visualization
- Statistical fundamentals at scale
- Evaluation of data quality and feasibility
- Establishing data constraints (e.g., TFDV)
Building data pipelines. Considerations include:
- Organizing and optimizing training datasets
- Data validation
- Handling missing data
- Handling outliers
- Data leakage
Creating input features (feature engineering). Considerations include:
- Ensuring consistent data pre-processing between training and serving
- Encoding structured data types
- Feature selection
- Class imbalance
- Feature crosses
- Transformations (TensorFlow Transform)

04. Developing ML models
Building models. Considerations include:
- Choice of framework and model
- Modeling techniques given interpretability requirements
- Transfer learning
- Data augmentation
- Semi-supervised learning
- Model generalization and strategies to handle overfitting and underfitting
Training models. Considerations include:
- Ingestion of various file types into training (e.g., CSV, JSON, IMG, Parquet, databases, Hadoop/Spark)
- Training a model as a job in different environments
- Hyperparameter tuning
- Tracking metrics during training
- Retraining/redeployment evaluation
Testing models. Considerations include:
- Unit tests for model training and serving
- Model performance against baselines, simpler models, and across the time dimension
- Model explainability on AI Platform
Scaling model training and serving. Considerations include:
- Distributed training
- Scaling prediction service (e.g., AI Platform Prediction, containerized serving)

05. Automating and orchestrating ML pipelines
Designing and implementing training pipelines. Considerations include:
- Identification of components, parameters, triggers, and compute needs (e.g., Cloud Build, Cloud Run)
- Orchestration framework (e.g., Kubeflow Pipelines/AI Platform Pipelines, Cloud Composer/Apache Airflow)
- Hybrid or multicloud strategies
- System design with TFX components/Kubeflow DSL
Implementing serving pipelines. Considerations include:
- Serving (online, batch, caching)
- Google Cloud serving options
- Testing for target performance
- Configuring trigger and pipeline schedules
Tracking and auditing metadata. Considerations include:
- Organizing and tracking experiments and pipeline runs
- Hooking into model and dataset versioning
- Model/dataset lineage

06. Monitoring, optimizing, and maintaining ML solutions
Monitoring and troubleshooting ML solutions. Considerations include:
- Performance and business quality of ML model predictions
- Logging strategies
- Establishing continuous evaluation metrics (e.g., evaluation of drift or bias)
- Understanding the Google Cloud permissions model
- Identification of an appropriate retraining policy
- Common training and serving errors (TensorFlow)
- ML model failure and resulting biases
Tuning performance of ML solutions for training and serving in production. Considerations include:
- Optimization and simplification of the input pipeline for training
- Simplification techniques

Google Professional Machine Learning Engineer (GCP-PMLE) Certification Questions

01. You work for a textile manufacturer and have been asked to build a model to detect and classify fabric defects. You trained a machine learning model with high recall based on high-resolution images taken at the end of the production line. You want quality control inspectors to gain trust in your model. Which technique should you use to understand the rationale of your classifier?
a) Use K-fold cross validation to understand how the model performs on different test datasets.
b) Use the Integrated Gradients method to efficiently compute feature attributions for each predicted image.
c) Use PCA (Principal Component Analysis) to reduce the original feature set to a smaller set of easily understood features.
d) Use k-means clustering to group similar images together, and calculate the Davies-Bouldin index to evaluate the separation between clusters.

02. You are an ML engineer at a media company. You want to use machine learning to analyze video content, identify objects, and alert users if there is inappropriate content. Which Google Cloud products should you use to build this project?
a) Pub/Sub, Cloud Function, Cloud Vision API
b) Pub/Sub, Cloud IoT, Dataflow, Cloud Vision API, Cloud Logging
c) Pub/Sub, Cloud Function, Video Intelligence API, Cloud Logging
d) Pub/Sub, Cloud Function, AutoML Video Intelligence, Cloud Logging

03. You work for a large financial institution that is planning to use Dialogflow to create a chatbot for the company's mobile app. You have reviewed old chat logs and tagged each conversation for intent based on each customer's stated intention for contacting customer service. About 70% of customer inquiries are simple requests that are solved within 10 intents. The remaining 30% of inquiries require much longer and more complicated requests. Which intents should you automate first?
a) Automate a blend of the shortest and longest intents to be representative of all intents.
b) Automate the more complicated requests first because those require more of the agents' time.
c) Automate the 10 intents that cover 70% of the requests so that live agents can handle the more complicated requests.
d) Automate intents in places where common words such as "payment" only appear once to avoid confusing the software.

04. You work for a large retailer. You want to use ML to forecast future sales leveraging 10 years of historical sales data. The historical data is stored in Cloud Storage in Avro format. You want to rapidly experiment with all the available data. How should you build and train your model for the sales forecast?
a) Load data into BigQuery and use the ARIMA model type on BigQuery ML.
b) Convert the data into CSV format and create a regression model on AutoML Tables.
c) Convert the data into TFRecords and create an RNN model on TensorFlow on AI Platform Notebooks.
d) Convert and refactor the data into CSV format and use the built-in XGBoost algorithm on AI Platform Training.

05. You work for a gaming company that develops and manages a popular massively multiplayer online (MMO) game. The game's environment is open-ended, and a large number of positions and moves can be taken by a player. Your team has developed an ML model with TensorFlow that predicts the next move of each player. Edge deployment is not possible, but low-latency serving is required. How should you configure the deployment?
a) Use a Cloud TPU to optimize model training speed.
b) Use AI Platform Prediction with an NVIDIA GPU to make real-time predictions.
c) Use AI Platform Prediction with a high-CPU machine type to get a batch prediction for the players.
d) Use AI Platform Prediction with a high-memory machine type to get a batch prediction for the players.

06. Your team is using a TensorFlow Inception-v3 CNN model pretrained on ImageNet for an image classification prediction challenge on 10,000 images. You will use AI Platform to perform the model training. What TensorFlow distribution strategy and AI Platform training job configuration should you use to train the model and optimize for wall-clock time?
a) Default Strategy; Custom tier with a single master node and four V100 GPUs.
b) One Device Strategy; Custom tier with a single master node and four V100 GPUs.
c) One Device Strategy; Custom tier with a single master node and eight V100 GPUs.
d) MirroredStrategy; Custom tier with a single master node and four V100 GPUs.

07. You need to build an object detection model for a small startup company to identify if and where the company's logo appears in an image. You were given a large repository of images, some with logos and some without. These images are not yet labeled. You need to label these pictures, and then train and deploy the model. What should you do?
a) Use Google Cloud's Data Labeling Service to label your data. Use AutoML Object Detection to train and deploy the model.
b) Use Vision API to detect and identify logos in pictures and use it as a label. Use AI Platform to build and train a convolutional neural network.
c) Create two folders: one where the logo appears and one where it doesn't. Manually place images in each folder. Use AI Platform to build and train a convolutional neural network.
d) Create two folders: one where the logo appears and one where it doesn't. Manually place images in each folder. Use AI Platform to build and train a real-time object detection model.

08. You need to write a generic test to verify whether Dense Neural Network (DNN) models automatically released by your team have a sufficient number of parameters to learn the task for which they were built. What should you do?
a) Train the model for a few iterations, and check for NaN values.
b) Train the model for a few iterations, and verify that the loss is constant.
c) Train a simple linear model, and determine if the DNN model outperforms it.
d) Train the model with no regularization, and verify that the loss function is close to zero.

09. You work on a team where the process for deploying a model into production starts with data scientists training different versions of models in a Kubeflow pipeline. The workflow then stores the new model artifact into the corresponding Cloud Storage bucket. You need to build the next steps of the pipeline after the submitted model is ready to be tested and deployed in production on AI Platform. How should you configure the architecture before deploying the model to production?
a) Deploy model in test environment -> Evaluate and test model -> Create a new AI Platform model version
b) Validate model -> Deploy model in test environment -> Create a new AI Platform model version
c) Create a new AI Platform model version -> Evaluate and test model -> Deploy model in test environment
d) Create a new AI Platform model version -> Deploy model in test environment -> Validate model

10. You work for a manufacturing company that owns a high-value machine which has several machine settings and multiple sensors. A history of the machine's hourly sensor readings and known failure event data are stored in BigQuery. You need to predict if the machine will fail within the next 3 days in order to schedule maintenance before the machine fails. Which data preparation and model training steps should you take?
a) Data preparation: Daily max value feature engineering; Model training: AutoML classification with BQML
b) Data preparation: Daily min value feature engineering; Model training: Logistic regression with BQML and AUTO_CLASS_WEIGHTS set to True
c) Data preparation: Rolling average feature engineering; Model training: Logistic regression with BQML and AUTO_CLASS_WEIGHTS set to False
d) Data preparation: Rolling average feature engineering; Model training: Logistic regression with BQML and AUTO_CLASS_WEIGHTS set to True

Answers:
Question 01: Answer b
Question 02: Answer c
Question 03: Answer c
Question 04: Answer a
Question 05: Answer b
Question 06: Answer d
Question 07: Answer a
Question 08: Answer d
Question 09: Answer a
Question 10: Answer d

How to Register for the Professional Machine Learning Engineer Certification Exam?
● Visit the registration site for the Professional Machine Learning Engineer Certification Exam.
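Question 10's recommended data-preparation step, rolling-average feature engineering, smooths noisy hourly sensor readings over a trailing window before they are fed to the classifier. A minimal sketch in plain Python (the window size and sample values are illustrative, not taken from the question):

```python
# Rolling-average feature engineering as in question 10's answer (d):
# smooth noisy hourly sensor readings over a trailing window before
# feeding them to a classifier. Plain Python; the 3-reading window and
# sample values are illustrative only.

def rolling_average(readings, window=3):
    # For each position, average the current reading together with up to
    # window-1 previous ones (windows are shorter at the start of the series).
    out = []
    for i in range(len(readings)):
        chunk = readings[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out


if __name__ == "__main__":
    temps = [70.0, 74.0, 69.0, 90.0, 71.0]  # hourly sensor values
    print(rolling_average(temps))
```

In BigQuery ML this transformation would typically be expressed with an `AVG(...) OVER (... ROWS n PRECEDING)` window function at feature-creation time; the Python above just shows the arithmetic.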
This set of Google Professional Collaboration Engineer certification questions and the exam summary below will help you focus your preparation and stay on track to pass the GCP-PCE exam with a good score.

Google (GCP-PCE) Certification Summary
Google (GCP-PCE) Professional Collaboration Engineer Certification Exam Syllabus 01. Planning and implementing Google Workspace authorization and access Implementing authorization policies. Considerations include: - Basic user security controls (e.g., password length enforcement and 2-Step verification) - Security aspects of identity, perimeter security, and data protection Using Google Workspace as a service provider. Considerations include: - Configuring third-party SSO for Google Workspace - Integrating with third party for provisioning Using Google Workspace as an identity provider. Considerations include: - Configuring and managing SSO for common third-party applications - Configuring and managing provisioning Managing access to third-party applications and sites. Activities include: - Granting API access to applications that need access - Revoking third-party OAuth access - Removing connected applications and sites 02. Managing user, resource, and shared drive life cycles Managing users. Considerations include: - Adding users (e.g., individual, bulk, automated) - Removing users (e.g., suspending, deleting, recovering) - Transferring user data from one user to another - Editing user attributes (e.g., renaming, passwords, aliases) - Creating administrative roles (e.g., default roles, custom roles) - Managing user licenses (e.g., licensing models, Google Workspace SKUs) - Troubleshooting conflicting accounts - Implications of current Google Workspace APIs to development efforts - Using Google Apps Script to automate tasks Synchronizing data in your Google domain with your Microsoft® Active Directory® or LDAP server. Considerations include: - Integrating LDAP with Google Workspace - Configuring and troubleshooting GSPS and GCDS - Implications of current Google Workspace APIs to development efforts - Using Apps Script to automate tasks Managing organizational structure. 
Considerations include: - Designing efficient organizational unity (OU) structure based on business needs - Assigning users to relevant OUs - Modifying OU policies - Implications of current Google Workspace APIs to development efforts - Using Apps Script to automate tasks - Managing and verifying domains - Using Google Takeout to export data - Managing company profile settings Managing groups. Considerations include: - Configuring Google Groups - Adding users to groups - Implications of current Google Workspace APIs to development efforts - Using Apps Script to automate tasks Managing contacts. Considerations include: - Creating contacts - Sharing contacts - Implications of current Google Workspace APIs to development efforts - Using Apps Script to automate tasks 03. Managing mail Managing mail-related DNS settings. Activities include: - Managing domain MX records - Managing domain SPF records - Managing domain DKIM records - Managing domain DMARC records Diagnosing and resolving mail routing issues. Considerations include: - Analyzing mail flow - Analyzing email headers - Email log search - Disparate email services - Using Google Workspace Toolbox Configuring and managing security, compliance, and spam rules. Activities include: - Configuring attachment compliance - Configuring blocked senders - Configuring email allow list - Configuring objectionable content - Configuring phishing settings - Configuring spam settings - Managing admin quarantine - Configuring secure transport compliance - Configuring safety settings Configuring mail routing rules. Considerations include: - Configuring split and dual delivery scenarios - Implications of integrating third-party mail services - Configuring routing rules - Configuring recipient map - Configuring non-Gmail mailbox - Configuring hosts Configuring general mail settings. 
Activities include: - Configuring append footer setting - Configuring forwarding - Configuring SMTP relay - Enabling email delegation for an OU - Disabling IMAP and POP - Managing Gmail archives 04. Controlling and configuring Google Workspace services: Administering Google Workspace Services. Activities include: - Managing rollout of new Google functionality to end users - Troubleshooting Google Workspace services (e.g., performance issues for services suite, Google Workspace apps for OUs) - Configuring services Configuring and managing Google Workspace core apps. Considerations include: - Contacts/Directory (e.g., ability to make updates to Directory services or contact support or contacts sharing settings) - Calendar (e.g., Calendar sharing settings and delegations, Calendar resource management, Calendar invitations sent to Groups, troubleshooting calendar interoperability) - Drive/Shared Drive (e.g., storage limitations in Basic plan) - Groups (e.g., creating and editing groups, banning group members from posting to the group) - Chat/Meet (e.g., disabling Meet voice calls for organization, designing video conferencing integration of Meet with different end points, disabling bot access on new chats) - Sites (e.g., usage and implementation) Managing services integrations. Activities include: - Integrating third-party marketplace apps to specific OUs in Google Workspace - Evaluating Marketplace and Connected app landscape - Adding SP to Cloud Identity Implementing automation. Considerations include: - API permissions - Able to interact with APIs and lightweight scripting - Service Accounts 05. Configuring and managing content access Configuring and managing Vault. 
Activities include: - Setting retention rules (e.g., setting retention rules, placing legal holds, searching your domain’s data by user account, OU, date, or keyword, exporting data for additional processing and review, auditing reports) - Holding and exporting data - Running Vault audit reports Configuring and managing Drive and Shared Drive. Activities include: - Configuring sharing settings (e.g., updating sharing settings to external domains, controlling file access with Information Rights Management) - Managing Drive folder structure (e.g., using Google Drive native file formats, recommending Google Drive and Shared Drive file structures) Ensuring compliance with regulatory requirements. Activities include: - Scanning email with Data Loss Prevention (DLP) - Managing content compliance rules Implementing and monitoring data security. Activities include: - Configuring security and data region - Monitoring security health check - Configuring security settings - Creating security records - Designing security integration and addressing objections. Managing third-party applications and access. Activities include: - Backing up solutions of Google Workspace data for compliance - Allow list OAuth apps 06. Configuring and managing endpoint access Configuring mobile devices. Considerations include: - Company-owned vs. personal devices - Configuring personal device settings (e.g., password, Android, iOS, advanced, device approvals, app management, insights) Provisioning, deprovisioning, and enabling Chrome devices Managing Google meeting room hardware (e.g., provisioning, deprovisioning, hanging up, calling, rebooting) Managing Chrome apps, extensions, and Android apps Configuring network settings (e.g., Wi-Fi, Ethernet, VPN, certificates, and general) 07. 
Monitoring organizational operations Setting up and using reports (e.g., creating usage reports, measuring adoption and satisfaction) Setting up and using audits (e.g., auditing document sharing, auditing Gmail and drive, auditing users) Interpreting report and audit data and taking appropriate action. Examples include: - Troubleshooting and escalating support issues to Google Partner support or Google Engineering - Troubleshooting error logs - Identifying data leakage scenarios and implementing ways to prevent - Streaming audit data to BigQuery Managing alerts (e.g., suspicious logging activity, apps outages, TLS failure, user deleted) 08. Advancing Google Workspace adoption and collaboration Building business solutions and processes. Examples include: - Creating simple processes using Apps Script - Creating team project sites Staying up-to-date with Google Workspace. Examples include: - Using Google and partner support channels - Accessing and interpreting Google Workspace roadmap - Staying aware of new features and functionality in Google Workspace Google Professional Collaboration Engineer (GCP-PCE) Certification Questions 01. A company has reports of users sharing sensitive Google Drive content outside their domain through third-party add-ons. You need to control which third-party apps are allowed to access users' G Suite data. Which security feature should you use to achieve this? a) OAuth Whitelisting b) Configure DLP policies to prevent sharing of sensitive content with external parties. c) Block specific API scopes for each user. d) In the Drive SDK section, clear 'Allow users to access Google Drive with the Drive SDK API.' 02. A company needs to create a Google group for the customer service team. The members in that group should be able to assign and track received messages, mark a topic as resolved, and add/edit tags to a topic. What group type should you use? a) Web forum b) Email List c) Q&A Forum d) Collaborative Inbox 03. Your-company. 
com is currently migrating to Google Workspace. Some legacy applications are still using an on-premises exchange server to send emails. You enabled the SMTP Relay service in Google to route the messages. During an investigation it was determined that these messages are not discoverable in Google Vault. For compliance reasons, the Legal team is requiring that these messages are retained and discoverable. What should you do? a) Add the Exchange Server’s IP as an Inbound Gateway. b) Enable comprehensive mail storage. c) Create a Content Compliance rule to forward a copy of every message to a Google Group. d) Enable Gmail forwarding for exchange server. 04. An organization is pushing for an effective way to manage how users access corporate data from mobile devices. A recent change to the organization's wireless settings is allowing WiFi access to users who have personal devices but preventing them from accessing corporate applications and data sources. Users with company-owned devices are not experiencing the same issue. You are tasked with troubleshooting this issue. What should you do? a) Enable Advanced Mobile Management and approve the device. b) Disable Advanced Mobile Management and activate the device. c) Enable Advanced Mobile Management and unblock the device. d) Disable Advanced Mobile Management and approve the device. 05. Your compliance officers want to implement a new retention policy. Email will be retained for only 180 days for most users except for VIPs, who need to retain some messages indefinitely. Your VIPs' mail is already in a separate sub-organizational unit called VIPs. Which two configurations would meet your retention needs? (Choose two.) a) Create a custom retention rule for the root OU of 180 days. b) Create a custom retention rule for the VIP OU of indefinite. c) Create a default retention rule of 180 days. d) Create a custom retention rule for the VIP OU to indefinitely retain messages with a given label. 
e) Create a default retention rule for the VIP OU to indefinitely retain messages with a given label.
06. The organization is concerned about third-party applications accessing contact information. As a G Suite super admin, you are tasked with restricting third-party access without limiting users' ability to share contacts manually. What should you do?
a) Disable Contact Sharing.
b) Disable API access to Google Contacts and enable Directory Sharing.
c) Enable API access to Google Contacts and disable Directory Sharing.
d) Enable Contact Sharing.
07. External Company reports that they are not receiving messages from your users. Your users report that everything sends fine, and they are not receiving bounce-back messages or any notifications. You need to determine what could be causing the non-delivery and why they aren't receiving the notifications. What should you do?
a) Ask other customers on the Cloud Connect Community if they are experiencing outages.
b) Using MX Toolbox, ensure that your SPF, DKIM, and DMARC records are up to date.
c) Review the affected sent messages in the email audit log.
d) Connect to the user's mailbox and review the headers using the Google Workspace Toolbox.
08. Your Communications and Training department has a Google Site that provides updated critical business information to all employees. They want to learn how often the site is visited and how it is used. What should you do?
a) Embed a JavaScript page counter showing usage statistics.
b) Export the Apps Usage Activity report showing Sites activity and send the daily report to the Communications and Training department.
c) Add a Google Analytics web property ID to the Site.
d) Export the Drive audit log filtered to show Site views.
09. Your company has purchased a new six-story building that has 20 meeting rooms of various sizes. One of the meeting rooms is an executive conference room that only one person should be able to see and book. You have created that executive conference room in the Google Workspace > Calendar > Resources menu and need to restrict the sharing settings for that executive conference room. Which two actions should you take?
a) Delete the resource and create the meeting room as a secondary calendar on the person's Calendar account.
b) Show the meeting room as busy all the time so it never shows up as an available room.
c) Access the Settings of the Resource to assign the person permission to make changes.
d) Clear the options under Access Permissions in the Settings of the Resource so no one else has access.
e) Show the person how to monitor meetings scheduled in the room and how to cancel them.
10. Your company uses Google Workspace and has acquired a subsidiary that, for business reasons, will remain indefinitely on its existing third-party collaboration platform and legacy LDAP system. This subsidiary operates autonomously with a separate, unfederated Active Directory forest. Interaction between the two companies is expected to be infrequent and conducted primarily via email. Leadership's minimum requirement is to add employees of that subsidiary to your corporate global address list (GAL). What should you do?
a) Configure GCDS on the subsidiary LDAP to provision their users with Cloud Identity licenses on the parent domain.
b) Create a script that uses the Directory API to sync the subsidiary's contact list as shared contacts.
c) Publish a CSV file containing the subsidiary's directory for your users to upload into Google Contacts.
d) Provision the subsidiary users with G Suite accounts on the parent domain for the additional benefit of allowing collaboration in Drive.
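The recommended approach in question 10 (answer b) is a script that syncs the subsidiary's directory into the parent domain as shared contacts. As a minimal, hedged sketch of what such a script's data-shaping step might look like: the LDAP attribute names (`displayName`, `mail`) and the `to_shared_contacts` helper below are illustrative assumptions, and a real implementation would additionally authenticate and push the records through Google's APIs.

```python
# Hypothetical sketch: convert a subsidiary LDAP/AD export into the
# minimal contact records a sync script would push to the parent
# domain as shared contacts. Attribute names are illustrative only;
# a real script would also handle auth and the actual API calls.

def to_shared_contacts(ldap_entries):
    """Map raw LDAP attribute dicts to simple shared-contact dicts."""
    contacts = []
    for entry in ldap_entries:
        mail = entry.get("mail")
        if not mail:  # skip entries without an email address
            continue
        contacts.append({
            "full_name": entry.get("displayName", mail),
            "email": mail,
        })
    return contacts

if __name__ == "__main__":
    sample = [
        {"displayName": "Ada Lovelace", "mail": "ada@subsidiary.example"},
        {"displayName": "No Mail User"},  # filtered out: no address
    ]
    print(to_shared_contacts(sample))
```

Keeping the transformation separate from the API calls makes the sync easy to test and to re-run idempotently on a schedule, which matters since the subsidiary will stay on its own LDAP indefinitely.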
Answers:
Question 01: Answer: a
Question 02: Answer: d
Question 03: Answer: b
Question 04: Answer: a
Question 05: Answer: c, d
Question 06: Answer: b
Question 07: Answer: c
Question 08: Answer: c
Question 09: Answer: c, d
Question 10: Answer: b
How to Register for the Professional Collaboration Engineer Certification Exam?
● Visit the official Google Cloud certification site to register for the Professional Collaboration Engineer certification exam.