C_BODI_20 SAP Business Objects Data Integrator XI R2 Test Set 4

You have a Job open in the workspace. Which three objects on the current Job workspace can you validate by selecting the "Validate All" option?


Options are :

  • Data
  • WorkFlow
  • Data Flow
  • Script

Answer : WorkFlow, Data Flow, Script

You create a job containing two work flows and three data flows. The data flows are single threaded and contain no additional function calls or sub data flow operations running as separate processes. How many "al_engine" processes will run on the job server?


Options are :

  • Two
  • Six
  • Four
  • One

Answer : Four

You load over 10,000,000 records from the "customer" source table into a staging area. You need to remove duplicate customers during the loading of the source table. You do not need to record or audit the duplicates. Which two de-duplicating techniques will ensure the best performance?


Options are :

  • Use a Query transform to order the incoming data set and use the previous_row_value function in the WHERE clause to filter any duplicate rows
  • Use a Query transform to order the incoming data set, then a Table_Comparison transform with the "Input contains duplicates" and "Sorted input" options selected
  • Use the lookup_ext function with the "PRE_LOAD_CACHE" option selected to test each row for duplicates
  • Use the Table_Comparison transform with the "Input contains duplicates" and "Cached comparison table" options selected

Answer : Use a Query transform to order the incoming data set and use the previous_row_value function in the WHERE clause to filter any duplicate rows; Use a Query transform to order the incoming data set, then a Table_Comparison transform with the "Input contains duplicates" and "Sorted input" options selected
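The first correct technique — sort the data set, then compare each row's key against the previous row's value and drop matches — can be sketched outside Data Integrator. This is a minimal Python analog, not DI script; the `cust_id` field and sample rows are hypothetical:

```python
def dedupe_sorted(rows, key):
    """Keep the first occurrence of each key in a data set.

    Mimics the Query-transform approach: order by the key, then filter
    any row whose key equals the previous row's value.
    """
    previous = object()  # sentinel that matches no real key value
    out = []
    for row in sorted(rows, key=lambda r: r[key]):
        if row[key] != previous:   # same idea as previous_row_value()
            out.append(row)
            previous = row[key]
    return out

customers = [
    {"cust_id": 2, "name": "B"},
    {"cust_id": 1, "name": "A"},
    {"cust_id": 2, "name": "B duplicate"},
]
print(dedupe_sorted(customers, "cust_id"))
```

Because the comparison only ever looks one row back, the technique needs no comparison cache, which is why it performs well on very large inputs.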

You create a real-time job that processes data from an external application. Which two mechanisms enable the external application to send/receive messages to the real-time job?


Options are :

  • E-mail
  • Adapter instance
  • File on shared server
  • Web service call

Answer : Adapter instance, Web service call

C_BODI_20 SAP Certified Application Associate SAP BO Test Set 1

Which three combinations of input and output schemas are permitted in an embedded data flow?


Options are :

  • 1 input and multiple outputs
  • 0 inputs and 1 output
  • 1 input and 1 output
  • 1 input and 0 outputs

Answer : 0 inputs and 1 output; 1 input and 1 output; 1 input and 0 outputs

Which three objects can you use as a source in a Data Flow?


Options are :

  • Excel file
  • Template table
  • Row_Generation transform
  • Pivot transform

Answer : Excel file, Template table, Row_Generation transform

C_BODI_20 SAP Certified Application Associate SAP BO Test Set 2

When is a template table created in the database?


Options are :

  • You right-click and select “Create” on a template table
  • You right-click and select “Import” a template table to a permanent table
  • You create a template table on the Data Flow
  • You execute a Job

Answer : You execute a Job

Which two Data Integrator objects/operations support load balancing in a Server Group based architecture?


Options are :

  • Lookup_ext
  • Script
  • Job
  • While loop

Answer : Lookup_ext, Job

Which three objects does the Management Console "Impact and Lineage Analysis" capture?


Options are :

  • Source tables
  • Target files
  • SQL transform tables
  • BusinessObjects universes

Answer : Source tables, Target files, BusinessObjects universes

C_BODI_20 SAP Certified Application Associate SAP BO Test Set 3

Data Integrator contains "Execute only once" logic on which two objects?


Options are :

  • Scripts
  • Data flows
  • Conditionals
  • Work flows

Answer : Data flows, Work flows

Some of your incoming data are rejected by the database table because of conversion errors and primary key violations. You want to edit and load the failed data rows manually using the SQL Query tool. How can you perform this action?


Options are :

  • In the target table editor select "Use overflow file", select "Write SQL", and enter the file name
  • In the job properties select "Trace SQL_Errors" and copy the failed SQL command from the job trace log
  • Use the SQL contained in the error log file in the "BusinessObject/data integration/logs…" directory
  • In the data flow properties, select "SQL Exception file" and enter the filename

Answer : In the target table editor select "Use overflow file", select "Write SQL", and enter the file name

Which function must you use to call an external program?


Options are :

  • Exec
  • Run
  • System
  • Call

Answer : Exec
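Data Integrator's exec() runs an external program from a script and returns its output. As a rough analog only (not DI script), Python's standard `subprocess` module does the equivalent job; the command here is just the Python interpreter itself so the sketch stays self-contained:

```python
import subprocess
import sys

# Call an external program and capture its exit code and standard output,
# similar in spirit to exec() in a Data Integrator script.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from external program')"],
    capture_output=True,
    text=True,
)
print(result.returncode, result.stdout.strip())
```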

C_BODI_20 SAP Certified Application Associate SAP BO Test Set 4

Which two types of database metadata can be imported by browsing?


Options are :

  • Function
  • Trigger
  • View
  • Stored procedure
  • Table

Answer : View, Table

What is the function of the Case transform?


Options are :

  • To map a column value based on multiple conditions.
  • To join data sets from separate streams based on conditional logic
  • To select a Job path based on conditional logic.
  • To split data sets based on conditional logic into separate streams

Answer : To split data sets based on conditional logic into separate streams

Which two Data Integrator objects are reusable?


Options are :

  • Flat File Format
  • Script
  • While Loop
  • Work Flow

Answer : Flat File Format, Work Flow

C_BODI_20 SAP Certified Application Associate SAP BO Test Set 5

Which three applications can you use to schedule Data Integrator batch jobs?


Options are :

  • Third party scheduling applications
  • Data Integrator Designer
  • Data Integrator Scheduler
  • BusinessObjects Enterprise Scheduler

Answer : Third party scheduling applications, Data Integrator Scheduler, BusinessObjects Enterprise Scheduler

Which two steps are part of the profiling configuration process?


Options are :

  • Use Metadata Reports to review the profiler results.
  • Use the Web Administrator to submit a column profiler job on a particular table.
  • Use the Designer to log into the profiler repository
  • Use the Web Administrator to associate the profiler repository with the Web Administrator

Answer : Use the Designer to log into the profiler repository; Use the Web Administrator to associate the profiler repository with the Web Administrator

You create a two stage process for transferring data from a source system to a target data warehouse via a staging area. The job you create runs both processes in an overnight schedule. The job fails at the point of transferring the data from the staging area to the target data warehouse. During the work day you want to rerun the job without impacting the source system, and therefore want to run just the second stage of the process to transfer the data from the staging area to the data warehouse. How would you design this job?


Options are :

  • Create one data flow which extracts from the source system and populates both the staging area and the target data warehouse
  • Create two data flows, the first extracting the data from the source system, the second transferring the data to the target data warehouse
  • Create one data flow which extracts the data from the source system and uses a Data_Transfer transform to stage the data in the staging area before continuing to transfer the data to the target data warehouse
  • Create two data flows, the first extracting the data from the source system and using a Data_Transfer transform to write the data to the staging area; the second data flow extracts the data from the staging area and transfers it to the target data warehouse.

Answer : Create two data flows, the first extracting the data from the source system, the second transferring the data to the target data warehouse

C_BOE_30 SAP BO Enterprise Certified Application Associate Set 1

You want to join "sales", "customer", and "product" tables. Each table resides in a different datastore, and the join will not push down to one SQL command. The "sales" table contains approximately five million rows. The "customer" table contains approximately five thousand rows. The "product" table contains fifty records.

How would you set the source table options to maximize performance of this operation?


Options are :

  • Set the sales table join rank to 20 and cache to "no". Set the customer table join rank to 20 and cache to "yes". Then set the product table join rank to 10 and cache to "yes".
  • Set the sales table join rank to 30 and cache to "no". Set the customer table join rank to 20 and cache to "yes". Then set the product table join rank to 10 and cache to "yes".
  • Set the sales table join rank to 10 and cache to "no". Set the customer table join rank to 20 and cache to "yes". Then set the product table join rank to 30 and cache to "yes".
  • Set the sales table join rank to 20 and cache to "no". Set the customer table join rank to 10 and cache to "yes". Then set the product table join rank to 10 and cache to "yes".

Answer : Set the sales table join rank to 30 and cache to "no". Set the customer table join rank to 20 and cache to "yes". Then set the product table join rank to 10 and cache to "yes".

Which two objects must you use to create a valid real-time job?


Options are :

  • Data flow that contains an XML source file and has the "Make Port" option selected
  • Data flow that contains an XML source message
  • Data flow that contains an XML target file and has the "Make Port" option selected
  • Data flow that contains an XML target message

Answer : Data flow that contains an XML source message; Data flow that contains an XML target message

A global variable is set to restrict the number of rows being returned by the Query transform. Which two methods can you use to ensure the value of the variable is set correctly?


Options are :

  • Use the debugger to view the variable value being set
  • View the job monitor log for the variable value
  • Place the data flow in a try/catch block
  • Add the variable to a script inside a print statement

Answer : Use the debugger to view the variable value being set; Add the variable to a script inside a print statement

C_BOE_30 SAP BO Enterprise Certified Application Associate Set 2

You are working in a multi-user central repository based environment. You select "Rename owner" on an object which is not checked out. The object has one or more dependent objects in the local repository. What is the outcome?


Options are :

  • Data Integrator renames the individual object owner.
  • Data Integrator displays a second window listing the dependent objects. When you click "Continue", the object owner is renamed and all of the dependent objects are modified.
  • Data Integrator renames the owner of all objects within the selected datastore.
  • Data Integrator displays the message: "This object is checked out from central repository 'X'. Please select Tools > Central Repository to activate that repository before renaming."

Answer : Data Integrator displays a second window listing the dependent objects. When you click "Continue", the object owner is renamed and all of the dependent objects are modified.

Where do imported stored procedures appear in the Local Object Library?


Options are :

  • Datastore Functions
  • External Functions
  • Built-in Functions
  • Custom Functions

Answer : Datastore Functions

How long is the table data within a persistent cache datastore retained?


Options are :

  • Until the execution of the batch job
  • Until the job server is restarted
  • Until the table is reloaded
  • Until the real-time service is restarted

Answer : Until the table is reloaded

C_BOE_30 SAP BO Enterprise Certified Application Associate Set 3

You want to print the “Employee’s name” string to the trace log. Which expression is correct?


Options are :

  • Print('Employee's name');
  • Print("Employee's name");
  • Print('Employee\'s name');
  • Print('Employee"s name');

Answer : Print('Employee\'s name');

Which two items are included on the Operational Dashboards?


Options are :

  • Job Server Resource Utilization History
  • Job Execution Duration History
  • Job Schedule History
  • Job Execution Statistics History

Answer : Job Execution Duration History, Job Execution Statistics History

What transform must you use to join two source tables?


Options are :

  • Map_Operation
  • Case
  • Query
  • Table_Comparison

Answer : Query

C_BOE_30 SAP BO Enterprise Certified Application Associate Set 4

What are the three possible row operation codes for output from the Table_Comparison transform?


Options are :

  • Delete
  • Update
  • Insert
  • Normal
  • Discard

Answer : Delete, Update, Insert

Your data flow loads the contents of "order_details" and "order_headers" into one XML file that contains a node <HEADER> and a child node <DETAIL>. How should you populate the structure in your Query?


Options are :

  • In the HEADER schema use order_headers in the FROM clause and put order_headers.order_id = order_details.order_id in the WHERE clause. In the DETAIL schema use order_details in the FROM clause and leave the WHERE clause empty.
  • In the HEADER schema use order_headers in the FROM clause and leave the WHERE clause empty. In the DETAIL schema use order_headers in the FROM clause and put order_headers.order_id = order_details.order_id in the WHERE clause.
  • In the HEADER schema use order_headers in the FROM clause, and in the WHERE clause put order_headers.order_id = order_details.order_id. In the DETAIL schema use order_headers, order_details in the FROM clause and leave the WHERE clause empty.
  • In the HEADER schema use order_headers in the FROM clause and leave the WHERE clause empty. In the DETAIL schema use order_details in the FROM clause and in the WHERE clause put order_headers.order_id = order_details.order_id.

Answer : In the HEADER schema use order_headers in the FROM clause and leave the WHERE clause empty. In the DETAIL schema use order_details in the FROM clause and in the WHERE clause put order_headers.order_id = order_details.order_id.
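The nesting logic in the answer — a HEADER schema with an empty WHERE clause and a DETAIL sub-schema whose WHERE clause correlates child rows to the current header — can be sketched in plain Python. The field names (`order_id`, `customer`, `item`, `qty`) and sample rows are illustrative only:

```python
order_headers = [{"order_id": 1, "customer": "Acme"}]
order_details = [
    {"order_id": 1, "item": "bolt", "qty": 10},
    {"order_id": 1, "item": "nut", "qty": 10},
]

nested = []
for header in order_headers:             # HEADER schema: FROM order_headers, WHERE empty
    details = [                          # DETAIL schema: correlated WHERE on the child
        d for d in order_details
        if d["order_id"] == header["order_id"]
    ]
    nested.append({"HEADER": header, "DETAIL": details})

print(nested)
```

One top-level row per header, with the matching detail rows grouped beneath it, is exactly the shape the <HEADER>/<DETAIL> XML target expects.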

You want to join the "sales" and "customer" tables. The tables reside in different datastores. The "sales" table contains approximately five million rows. The "customer" table contains approximately five thousand rows. The join occurs in memory. How would you set the source table options to maximize the performance of the operation?


Options are :

  • Set the sales table join rank to 10 and the cache to "yes", then set the customer table join rank to 5 and cache to "yes".
  • Set the sales table join rank to 10 and the cache to "no", then set the customer table join rank to 5 and cache to "yes".
  • Set the sales table join rank to 5 and the cache to "yes", then set the customer table join rank to 10 and cache to "no".
  • Set the sales table join rank to 5 and the cache to "no", then set the customer table join rank to 10 and cache to "no".

Answer : Set the sales table join rank to 10 and the cache to "no", then set the customer table join rank to 5 and cache to "yes".
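The caching choice in the answer — cache the small customer table in memory, stream the large sales table against it uncached — is essentially an in-memory hash join. A minimal Python sketch under that assumption (the table contents are hypothetical):

```python
# Small "customer" table: cache = "yes", held in memory as a lookup dict.
customers = {1: "Acme", 2: "Bmart"}

def stream_join(sales_rows, customer_cache):
    """Inner-join a large streamed table against a small cached table."""
    for sale in sales_rows:              # large "sales" table: cache = "no",
        name = customer_cache.get(sale["cust_id"])  # read once, row by row
        if name is not None:
            yield {**sale, "customer": name}

sales = [
    {"cust_id": 1, "amount": 100},
    {"cust_id": 3, "amount": 5},     # no matching customer: dropped
]
print(list(stream_join(sales, customers)))
```

Caching the five-million-row table instead would exhaust memory for no benefit, which is why the large table gets cache "no" and the highest join rank.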

C_BODI_20 SAP Business Objects Data Integrator XI R2 Test Set 1

How do you verify you can launch a batch job?


Options are :

  • Check the Job Server Status icon
  • Check the Profile Server icon
  • Run the Job Server Test Tool
  • Run the Profile Server Test Tool

Answer : Check the Job Server Status icon

You are using an Oracle 10g database for the source and target tables in your data flow. In which circumstance will Data Integrator optimize the SQL to use the Oracle "MERGE" command?


Options are :

  • The Map_Operation transform is used to map all rows from "normal" to "update" row operations
  • A table comparison is used to compare the source with the target
  • The "Use input Keys" option is selected on the target table editor
  • The "Auto Correct load" option is selected on the Target table

Answer : The "Auto Correct load" option is selected on the Target table

Where is the data stored for a template table?


Options are :

  • Job Server memory during and after Job execution
  • Underlying database table only during execution
  • Job Server memory during Job execution
  • Underlying database table during and after execution

Answer : Underlying database table during and after execution

C_BODI_20 SAP Business Objects Data Integrator XI R2 Test Set 2

Which function must you use to retrieve the current row number of your data set?


Options are :

  • Current_Row
  • Gen_Row_Num
  • Key_Generation
  • @@row

Answer : Gen_Row_Num
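gen_row_num() returns a sequential number for each row as it passes through a Query transform's output mapping. A rough Python analog (the row data is made up):

```python
from itertools import count

def with_row_numbers(rows):
    """Attach a sequential row number to each row, analogous to mapping
    gen_row_num() to an output column in a Query transform."""
    counter = count(start=1)  # first row gets 1, like gen_row_num()
    return [{**row, "row_num": next(counter)} for row in rows]

data = [{"name": "a"}, {"name": "b"}]
print(with_row_numbers(data))
```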

Your data contains errors and some rows fail to load. How do you ensure that valid data continues to be written?


Options are :

  • Select "Auto Correct Load"
  • Specify a Post-Load Command
  • Select "Use Overflow File"
  • Specify a Pre-Load Command

Answer : Select "Use Overflow File"
