C_BODI_20 SAP Certified Application Associate SAP BO Test Set 1

You need to use an external web service within Data Integrator. What information will you need to configure Data Integrator to use this web service?




Options are :

  • Document Type Definition (DTD)
  • Web service URL
  • XML schema definition (XSD)
  • Web Service Definition Language (WSDL) (Correct)

Answer : Web Service Definition Language (WSDL)


Some of your incoming data are rejected by the database table because of conversion errors and primary key violations. You want to edit and load the failed data rows manually using the SQL Query tool. How can you perform this action?



Options are :

  • In the data flow properties, select "SQL Exception file" and enter the filename.
  • Use the SQL contained in the error log file in the "BusinessObjects/Data Integrator/logs…" directory.
  • In the target table editor, select "use overflow file", select "write SQL", and enter the filename. (Correct)
  • In the job properties, select "trace SQL_Errors" and copy the failed SQL command from the job trace log.

Answer : In the target table editor, select "use overflow file", select "write SQL", and enter the filename.

You want to join a "sales", "customer", and "product" table. Each table resides in a different datastore, and the join will not push down to one SQL command. The "sales" table contains approximately five million rows. The "customer" table contains approximately five thousand rows. The "product" table contains fifty records. How would you set the source table options to maximize the performance of this operation?



Options are :

  • Set the sales table join rank to 20 and cache to "no". Set the customer table join rank to 20 and cache to "yes". Then set the product table join rank to 10 and cache to "yes".
  • Set the sales table join rank to 10 and cache to "no". Set the customer table join rank to 20 and cache to "yes". Then set the product table join rank to 30 and cache to "yes".
  • Set the sales table join rank to 20 and cache to "no". Set the customer table join rank to 10 and cache to "yes". Then set the product table join rank to 10 and cache to "yes".
  • Set the sales table join rank to 30 and cache to "no". Set the customer table join rank to 20 and cache to "yes". Then set the product table join rank to 10 and cache to "yes". (Correct)

Answer : Set the sales table join rank to 30 and cache to "no". Set the customer table join rank to 20 and cache to "yes". Then set the product table join rank to 10 and cache to "yes".

Your table contains the "sales_date" and "sales_time" fields. Both fields are of data type varchar(20). The "sales_date" format is '21-jan-1980'. The "sales_time" format is '18:30:12'. You need to combine both fields and load the result into a single target field of the datetime data type. Which expressions must you use to perform the conversion?

A. to_date(sales_date || ' ' || sales_time, 'dd-mon-yyyy hh24:mi:ss')

B. to_date(sales_date & ' ' & sales_time, 'dd-mon-yyyy hh24:mi:ss')

C. to_date(sales_date || ' ' || sales_time, 'dd-mmm-yyyy hh24:mi:ss')

D. to_date(sales_date & ' ' & sales_time, 'dd-mmmm-yyyy hh24:mi:ss')



Options are :

  • C,D
  • A,B (Correct)
  • B,C
  • D,A

Answer : A,B
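For illustration, the winning expressions simply concatenate the two varchar fields with a space and parse the result as a datetime. A minimal Python sketch of the same logic (the function name and sample values are mine, not Data Integrator code):

```python
from datetime import datetime

def combine_sales_datetime(sales_date: str, sales_time: str) -> datetime:
    """Concatenate the two varchar fields with a space, then parse the
    combined string, mirroring
    to_date(sales_date || ' ' || sales_time, 'dd-mon-yyyy hh24:mi:ss')."""
    combined = sales_date + " " + sales_time          # the || concatenation
    return datetime.strptime(combined, "%d-%b-%Y %H:%M:%S")

result = combine_sales_datetime("21-jan-1980", "18:30:12")
print(result)  # 1980-01-21 18:30:12
```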


A system configuration allows you to group datastore configurations together in which of the following setups?



Options are :

  • A single configuration for a single datastore.
  • A single configuration for multiple datastores. (Correct)
  • Multiple datastore configurations for multiple datastores.
  • Multiple datastore configurations for a single datastore.

Answer : A single configuration for multiple datastores.

Which two interfaces require an adapter datastore? (Choose two.)

A. COBOL copybook

B. MQ Series

C. SAP R/3

D. Web service



Options are :

  • B,C
  • A,B
  • C,D
  • B,D (Correct)

Answer : B,D

How do you set the degree of parallelism value that a data flow will use?



Options are :

  • Select each transform in the data flow, select "Properties", and enter a number for the degree of parallelism.
  • Select the target table editor and enter a number for the "Number of loaders".
  • Right-click the data flow, select "Properties", and enter a number for the degree of parallelism. (Correct)
  • Right-click the work flow, select "Properties", and enter a number for the "Degree of parallelism".

Answer : Right-click the data flow, select "Properties", and enter a number for the degree of parallelism.


Which three combinations of input and output schemas are permitted in an embedded data flow? (Choose three.)

A. 1 input and 0 outputs

B. 0 inputs and 1 output

C. 1 input and 1 output

D. 1 input and multiple outputs

E. Multiple inputs and multiple outputs



Options are :

  • D,E,A
  • B,C,D
  • A,B,C (Correct)
  • C,D,E

Answer : A,B,C

Which two tasks must you perform to ensure the BusinessObjects Universe Builder transfers the correct metadata lineage information? (Choose two.)

A. Calculate usage dependencies

B. Calculate column mappings

C. Calculate column dependencies

D. Calculate table mappings



Options are :

  • A,B (Correct)
  • D,A
  • B,C
  • C,D

Answer : A,B

You load over 10,000,000 records from the "customer" source table into a staging area. You need to remove duplicate customers during the loading of the source table. You do not need to record or audit the duplicates. Which two de-duplication techniques will ensure the best performance? (Choose two.)

A. Use a Query transform to order the incoming data set and use the previous_row_value function in the WHERE clause to filter any duplicate rows.

B. Use a Query transform to order the incoming data set, then a Table_Comparison transform with the "Input contains duplicates" and "Sorted input" options selected.

C. Use the Table_Comparison transform with the "Input contains duplicates" and "Cached comparison table" options selected.

D. Use the lookup_ext function with the "PRE_LOAD_CACHE" option selected to test each row for duplicates.



Options are :

  • B,C
  • D,A
  • A,B (Correct)
  • C,D

Answer : A,B
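Technique A above can be sketched outside Data Integrator: sort the rows, then emit a row only when its key differs from the previous row's key (a stand-in for the previous_row_value filter). An illustrative Python sketch with made-up sample data:

```python
def dedup_sorted(rows, key):
    """Emit a row only when its key differs from the previous row's key,
    mirroring a Query ORDER BY plus a previous_row_value(key) <> key filter."""
    prev = object()  # sentinel that never equals a real key
    for row in sorted(rows, key=key):
        k = key(row)
        if k != prev:
            yield row
        prev = k

customers = [{"id": 2}, {"id": 1}, {"id": 2}, {"id": 3}, {"id": 1}]
print(list(dedup_sorted(customers, key=lambda r: r["id"])))
# [{'id': 1}, {'id': 2}, {'id': 3}]
```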


You create an expression that tests if the "Zip code" field matches a standard format of five numeric digits and the value begins with a 1 or 2. Which expression must you use to do this?



Options are :

  • match_pattern(value, '?[112]9999')
  • match_pattern(value, '?[12]9999')
  • match_pattern(value, '[112]9999')
  • match_pattern(value, '[12]9999') (Correct)

Answer : match_pattern(value, '[12]9999')
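In Data Integrator's match_pattern syntax, '9' is a single-digit placeholder, so '[12]9999' means "a 1 or 2 followed by four digits". A rough Python regex equivalent (an illustrative sketch, not DI code):

```python
import re

def match_zip(value: str) -> bool:
    """match_pattern(value, '[12]9999'): '[12]' matches 1 or 2, and each '9'
    matches any single digit, giving a 5-digit string starting with 1 or 2."""
    return re.fullmatch(r"[12][0-9]{4}", value) is not None

print(match_zip("12345"))   # True
print(match_zip("92345"))   # False: leading digit is not 1 or 2
print(match_zip("1234"))    # False: only four digits
```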

Where can the XML_Pipeline transform be used within a data flow? (Choose two)

A. Immediately after an XML source file.

B. Immediately after an XML source message.

C. Immediately after a Query containing nested data.

D. Immediately after an XML template.



Options are :

  • A,B (Correct)
  • B,C
  • D,A
  • C,D

Answer : A,B

Which two Data Integrator objects/operations support load balancing in a Server Group-based architecture? (Choose two.)

A. Job

B. Lookup_ext

C. Script

D. While loop



Options are :

  • C,D
  • A,B (Correct)
  • B,C
  • D,A

Answer : A,B


You need to build a job that reads a file containing headers and footers. The header record always starts with 00. The body records start with 01. The footer record starts with 99. The header record contains customer details. The body records contain sales information. The footer indicates the number of rows in the file. The three record types contain different numbers of fields. You need to use all of the information in the file in your data flow. Which technique can you use to interpret this type of file?



Options are :

  • Create one file format template, select "yes" for "File contains Header/Footer", then specify the header and footer markers, and use the format in one data flow.
  • Create one file format template and three data flows, configuring the "ignore row markers" option to interpret the different parts of the file.
  • Create three file format templates, one each for the header, body, and footer records. Load the file using three data flows and use the "ignore row markers" option to separate out the header, body, and footer records. (Correct)
  • Create one file format template for the record type that contains the most fields. Use this format in one data flow and use a Case transform to separate out the header, body, and footer records.

Answer : Create three file format templates, one each for the header, body, and footer records. Load the file using three data flows and use the "ignore row markers" option to separate out the header, body, and footer records.
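The winning approach routes each row by its two-character record-type marker. A rough Python sketch of that routing, with made-up sample lines (the function name is mine):

```python
def split_record_types(lines):
    """Route each line by its two-character record-type marker, mirroring
    three file formats that each ignore the other two row markers."""
    header, body, footer = [], [], []
    for line in lines:
        if line.startswith("00"):      # customer details
            header.append(line)
        elif line.startswith("01"):    # sales information
            body.append(line)
        elif line.startswith("99"):    # row count
            footer.append(line)
    return header, body, footer

sample = ["00,ACME Corp,NY", "01,widget,5", "01,gadget,2", "99,3"]
h, b, f = split_record_types(sample)
print(len(h), len(b), len(f))  # 1 2 1
```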

A global variable is set to restrict the number of rows being returned by the Query transform. Which two methods can you use to ensure the value of the variable is set correctly? (Choose two.)

A. Add the variable to a script inside a print statement

B. Use the debugger to view the variable value being set.

C. View the job monitor log for the variable value

D. Place the data flow in a try/catch block.



Options are :

  • D,A
  • A,B (Correct)
  • C,D
  • B,C

Answer : A,B

You create a two-stage process for transferring data from a source system to a target data warehouse via a staging area. The job you create runs both processes in an overnight schedule. The job fails at the point of transferring the data from the staging area to the target data warehouse. During the work day you want to rerun the job without impacting the source system, and therefore want to run just the second stage of the process to transfer the data from the staging area to the data warehouse. How would you design this job?



Options are :

  • Create one data flow which extracts from the source system and populates both the staging area and the target data warehouse.
  • Create one data flow which extracts the data from the source system and uses a Data_Transfer transform to stage the data in the staging area before continuing to transfer the data to the target data warehouse.
  • Create two data flows: the first extracting the data from the source system, the second transferring the data to the target data warehouse. (Correct)
  • Create two data flows: the first extracting the data from the source system and using a Data_Transfer transform to write the data to the staging area; the second extracting the data from the staging area and transferring it to the target data warehouse.

Answer : Create two data flows: the first extracting the data from the source system, the second transferring the data to the target data warehouse.


Your Data Integrator environment interprets year values greater than 15 as 1915 instead of 2015. You must ensure Data Integrator interprets any date from "00 to 90" as "2000 to 2090" without making direct modifications to the underlying data flow. Which method should you use to accomplish this task?



Options are :

  • Log into the Designer, select Tools > Options > Data > General, and modify the "Century change year" to 90. (Correct)
  • Open the Web Administration tool, select Management > Repositories, edit the production repository, and modify the "Century change year" to 90.
  • Open the Server Manager, select "Edit Job Server Config", and modify the "Century change year" to 90.
  • On the job server, open Windows Control Panel > Regional Settings > Customize, and modify the two-digit year interpretation to 90.

Answer : Log into the Designer, select Tools > Options > Data > General, and modify the "Century change year" to 90.
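The "Century change year" acts as a pivot for two-digit years: values at or below the pivot map into the 2000s, values above it into the 1900s. A small Python sketch of the interpretation rule described above (the function name is mine, not a DI API):

```python
def expand_two_digit_year(yy: int, century_change_year: int = 90) -> int:
    """Two-digit years at or below the pivot map to 20xx; years above it
    map to 19xx (the 'Century change year' behaviour described above)."""
    return 2000 + yy if yy <= century_change_year else 1900 + yy

print(expand_two_digit_year(15))  # 2015
print(expand_two_digit_year(90))  # 2090
print(expand_two_digit_year(95))  # 1995
```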

Which three column-level metadata attributes are transferred to the semantic universe layer of BusinessObjects Enterprise via the BusinessObjects Universe Builder? (Choose three.)

A. Business Name

B. Business Description

C. Column_Usage

D. Default values



Options are :

  • A,B,D
  • B,C,D
  • C,D,A
  • A,B,C (Correct)

Answer : A,B,C

Which two objects must you use to create a valid real-time job? (Choose two.)

A. A data flow that contains an XML source message.

B. A data flow that contains an XML target message.

C. A data flow that contains an XML source file and has the "Make Port" option selected.

D. A data flow that contains an XML target file and has the "Make Port" option selected.



Options are :

  • A,B (Correct)
  • B,C
  • C,D
  • D,A

Answer : A,B


You create a job containing two work flows and three data flows. The data flows are single-threaded and contain no additional function calls or sub-data-flow operations running as separate processes. How many "al_engine" processes will run on the job server?



Options are :

  • 2
  • 4 (Correct)
  • 6
  • 1

Answer : 4
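The count follows from one process for the job itself plus one per data flow; work flows execute inside the job's process rather than spawning their own. As a sanity-check sketch (my own helper, not a DI API):

```python
def al_engine_count(num_data_flows: int) -> int:
    """One al_engine process coordinates the job; each single-threaded data
    flow runs in its own al_engine process. Work flows do not add processes."""
    return 1 + num_data_flows

print(al_engine_count(3))  # 4
```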

You are using an Oracle 10g database for the source and target tables in your data flow. In which circumstance will Data Integrator optimize the SQL to use the Oracle "merge" command?



Options are :

  • The "Auto Correct load" option is selected on the Target table. (Correct)
  • The Map_Operation is used to map all items from " normal " to" update now operations
  • A table comparison is used to compare the source with the target
  • The "Use input Keys" option is selected on the target table editor

Answer : The "Auto Correct load" option is selected on the Target table.

You want to join the "sales" and "customer" tables. The tables reside in different datastores. The "sales" table contains approximately five million rows. The "customer" table contains approximately five thousand rows. The join occurs in memory. How would you set the source table options to maximize the performance of the operation?



Options are :

  • Set the sales table join rank to 10 and the cache to "yes", then set the customer table join rank to 5 and cache to "yes".
  • Set the sales table join rank to 5 and the cache to "yes", then set the customer table join rank to 10 and cache to "no".
  • Set the sales table join rank to 10 and the cache to "no", then set the customer table join rank to 5 and cache to "yes". (Correct)
  • Set the sales table join rank to 5 and the cache to "no", then set the customer table join rank to 10 and cache to "no".

Answer : Set the sales table join rank to 10 and the cache to "no", then set the customer table join rank to 5 and cache to "yes".
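The winning settings amount to a classic hash join: cache the small table in memory and stream the large one through it. An illustrative Python sketch with made-up rows (not DI internals):

```python
def cached_join(large_rows, small_rows, key):
    """Build an in-memory hash table over the small (cached) table and stream
    the large (uncached) table through it, mirroring a high join rank with
    cache "no" on the large source and cache "yes" on the small one."""
    cache = {key(r): r for r in small_rows}   # small table fits in memory
    for row in large_rows:                    # large table streamed row by row
        match = cache.get(key(row))
        if match is not None:
            yield {**row, **match}

sales = [{"cust_id": 1, "amount": 10}, {"cust_id": 2, "amount": 20}]
customers = [{"cust_id": 1, "name": "Ada"}]
print(list(cached_join(sales, customers, key=lambda r: r["cust_id"])))
# [{'cust_id': 1, 'amount': 10, 'name': 'Ada'}]
```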


How long is the table data within a persistent cache data store retained?



Options are :

  • Until the execution of the batch job.
  • Until the table is reloaded (Correct)
  • Until the real-time service is restarted
  • Until the job server is restarted.

Answer : Until the table is reloaded

The "full name" field contains the first and last name of each employee separated by a space.Which two expressions can you use to extract only the employee's last name from this field?(Choose two).

A. parse_ext(employee_name,2,' ')

B. parse(employee_name,2,' ')

C. word_ext(employee_name,2,' ')

D. word(employee_name,2,' ')



Options are :

  • C,D (Correct)
  • B,C
  • D,A
  • A,B

Answer : C,D
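word_ext(field, n, sep) returns the n-th separated word. A rough Python stand-in (not the DI implementation) showing why index 2 yields the last name here; the sample name is made up:

```python
def word_ext(text: str, n: int, sep: str = " ") -> str:
    """Return the n-th word (1-based) of text split on sep, mirroring
    word_ext(employee_name, 2, ' '); returns '' when n is out of range."""
    parts = [p for p in text.split(sep) if p]
    return parts[n - 1] if 0 < n <= len(parts) else ""

print(word_ext("Jane Smith", 2))  # Smith
```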

Your table contains the "sales_date" and "sales_time" fields. Both fields are of data type varchar(20). The "sales_date" format is '21-jan-1980'. The "sales_time" format is '18:30:12'. You need to combine both fields and load the result into a single target field of the datetime data type. Which expression must you use to perform the conversion?



Options are :

  • to_date(sales_date || ' ' || sales_time, 'dd-mmm-yyyy hh24:mi:ss')
  • to_date(sales_date & ' ' & sales_time, 'dd-mon-yyyy hh24:mi:ss')
  • to_date(sales_date & ' ' & sales_time, 'dd-mmmm-yyyy hh24:mi:ss')
  • to_date(sales_date || ' ' || sales_time, 'dd-mon-yyyy hh24:mi:ss') (Correct)

Answer : to_date(sales_date || ' ' || sales_time, 'dd-mon-yyyy hh24:mi:ss')


Which lookup function returns multiple columns?



Options are :

  • Lookup_Adv
  • Lookup
  • Lookup_Seq
  • Lookup_Ext (Correct)

Answer : Lookup_Ext


Which two items are included on the Operational Dashboards? (Choose two)

A. Job Execution Statistics History

B. Job Execution Duration History

C. Job Schedule History

D. Job Server Resource Utilization History



Options are :

  • D,A
  • A,B (Correct)
  • B,C
  • C,D

Answer : A,B

You need to merge two tables from heterogeneous data sources and apply multiple join conditions. The tables contain very large data volumes. What is the recommended method to maximize performance and reduce system resources?



Options are :

  • Use the Merge transform to merge the data into a single table before applying the join conditions.
  • Use the Data_Transfer transform to stage the tables in the same database before applying the join conditions. (Correct)
  • Use multiple flat files to stage the tables, then join the flat files using a Query transform.
  • Use the Query transform to join the two tables by applying the join conditions.

Answer : Use the Data_Transfer transform to stage the tables in the same database before applying the join conditions

Which three applications can you use to schedule Data Integrator batch jobs? (Choose three)

A. Third party scheduling applications

B. BusinessObjects Enterprise Scheduler

C. Data Integrator Scheduler

D. Data Integrator Designer



Options are :

  • C,D,A
  • A,B,C (Correct)
  • B,C,D
  • A,B,D

Answer : A,B,C


Your data flow loads the contents of "order_details" and "order_headers" into one XML file that contains a node <HEADER> and a child node <DETAIL>. How should you populate the structure in your Query?



Options are :

  • In the HEADER schema use order_headers in the FROM clause and put order_headers.order_id = order_details.order_id in the WHERE clause. In the DETAIL schema use order_details in the FROM clause and leave the WHERE clause empty.
  • In the HEADER schema use order_headers in the FROM clause and leave the WHERE clause empty. In the DETAIL schema use order_headers in the FROM clause and put order_headers.order_id = order_details.order_id in the WHERE clause.
  • In the HEADER schema use order_headers in the FROM clause, and in the WHERE clause put order_headers.order_id = order_details.order_id. In the DETAIL schema use order_headers, order_details in the FROM clause and leave the WHERE clause empty.
  • In the HEADER schema use order_headers in the FROM clause and leave the WHERE clause empty. In the DETAIL schema use order_details in the FROM clause and in the WHERE clause put order_headers.order_id = order_details.order_id. (Correct)

Answer : In the HEADER schema use order_headers in the FROM clause and leave the WHERE clause empty. In the DETAIL schema use order_details in the FROM clause and in the WHERE clause put order_headers.order_id = order_details.order_id.
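The winning design leaves the HEADER schema unfiltered and filters the DETAIL schema against the current header row. A Python sketch of the resulting nesting, using made-up rows (the function name is mine):

```python
def nest_orders(headers, details):
    """Pair every unfiltered header row with its matching detail rows,
    mirroring a HEADER schema with no WHERE clause and a DETAIL schema
    filtered on order_headers.order_id = order_details.order_id."""
    return [
        {"HEADER": h,
         "DETAIL": [d for d in details if d["order_id"] == h["order_id"]]}
        for h in headers
    ]

headers = [{"order_id": 1, "cust": "A"}]
details = [{"order_id": 1, "item": "x"}, {"order_id": 2, "item": "y"}]
print(nest_orders(headers, details))
```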
