BODS Question Answers


Hello friends, in this post we are going to discuss BODS multiple choice questions with answers | BODS question answers for Wipro TrendNXT MySkillz | BODS objective type questions with answers | BODS MCQs with answers.

If you are looking for more dumps for MySkillz, visit here.

1. What is the use of Business Objects Data Services?

Answer:

Business Objects Data Services provides a graphical interface that allows you to easily create jobs that extract data from heterogeneous sources, transform that data to meet the business requirements of your organization, and load the data into a single location.

2. Define Data Services components.

Answer:

Data Services includes the following standard components:

  • Designer
  • Repository
  • Job Server
  • Engines
  • Access Server
  • Adapters
  • Real-time Services
  • Address Server
  • Cleansing Packages, Dictionaries, and Directories
  • Management Console

3. What are the steps included in Data integration process?

Answer:

  • Stage data in an operational data store, data warehouse, or data mart.
  • Update staged data in batch or real-time modes.
  • Create a single environment for developing, testing, and deploying the entire data integration platform.
  • Manage a single metadata repository to capture the relationships between different extraction and access methods and provide integrated lineage and impact analysis.

4. Define the terms Job, Workflow, and Dataflow

Answer:

  • A job is the smallest unit of work that you can schedule independently for execution.
  • A work flow defines the decision-making process for executing data flows.
  • Data flows extract, transform, and load data. Everything having to do with data, including reading sources, transforming data, and loading targets, occurs inside a data flow.

5. Arrange these objects in order by their hierarchy: Dataflow, Job, Project, and Workflow.

Answer

Project, Job, Workflow, Dataflow.

6. What are reusable objects in Data Services?

Answer:

Job, Workflow, Dataflow.

7. What is a transform?

Answer:

A transform enables you to control how datasets change in a dataflow.

8. What is a Script?

Answer:

A script is a single-use object that is used to call functions and assign values in a workflow.
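
For example, a minimal script sketch in the Data Services scripting language (the global variable name $G_StartDate is an assumption for illustration):

    # Capture the job start time in a global variable and write it to the trace log.
    $G_StartDate = sysdate();
    print('Load started on [$G_StartDate]');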

9. What is a real-time job?

Answer:

Real-time jobs “extract” data from the body of the real-time message received and from any secondary sources used in the job.

10. What is an Embedded Dataflow?

Answer:

An Embedded Dataflow is a dataflow that is called from inside another dataflow.

11. What is the difference between a data store and a database?

Answer:

A data store is a connection to a database; the database itself is what actually stores the data.

12. How many types of data stores are present in Data services?

Answer:

Three.

  • Database Data stores: provide a simple way to import metadata directly from an RDBMS.
  • Application Data stores: let users easily import metadata from most Enterprise Resource Planning (ERP) systems.
  • Adapter Data stores: can provide access to an application’s data and metadata or just metadata.

13. What is the use of Compact repository?

Answer:

Remove redundant and obsolete objects from the repository tables.

14. What are Memory Data stores?

Answer:

Data Services also allows you to create a database data store using Memory as the Database type. Memory Data stores are designed to enhance processing performance of data flows executing in real-time jobs.

15. What are file formats?

Answer:

A file format is a set of properties describing the structure of a flat file (ASCII). File formats describe the metadata structure. File format objects can describe files in:

  • Delimited format — Characters such as commas or tabs separate each field.
  • Fixed width format — The column width is specified by the user.
  • SAP ERP and R/3 format.

16. Which is NOT a datastore type?

Answer:

File Format

17. What is repository? List the types of repositories.

Answer:

The Data Services repository is a set of tables that holds user-created and predefined system objects, source and target metadata, and transformation rules. There are 3 types of repositories.

  • A local repository
  • A central repository
  • A profiler repository

18. What is the difference between a Repository and a Data store?

Answer:

A Repository is a set of tables that hold system objects, source and target metadata, and transformation rules. A Data store is an actual connection to a database that holds data.

19. What is the difference between a Parameter and a Variable?

Answer:

A Parameter is an expression that passes a piece of information to a work flow, data flow or custom function when it is called in a job. A Variable is a symbolic placeholder for values.

20. When would you use a global variable instead of a local variable?

Answer:

  • When the variable will need to be used multiple times within a job.
  • When you want to reduce the development time required for passing values between job components.
  • When you need to create a dependency between a job-level global variable and the job components.
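
As a hedged illustration (the variable name $G_LoadDate is hypothetical), a global variable set once in a job-level script is visible to every component of the job, whereas a local variable would have to be passed explicitly as a parameter:

    # Job-level script: initialize the global variable once.
    $G_LoadDate = sysdate();
    print('Processing data for [$G_LoadDate]');
    # Any workflow or data flow in this job can now reference $G_LoadDate
    # directly (for example in a Query WHERE clause) without parameter passing.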

21. What is Substitution Parameter?

Answer:

A value that is constant in one environment but may change when the job is migrated to another environment.
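
For example (a sketch under assumptions; the parameter name $$SourceDir and the paths are illustrative), a substitution parameter typically holds an environment-specific value such as a source directory:

    # $$SourceDir might be configured as 'D:/data/dev' in development and
    # '/data/prod' in production; file formats or scripts then reference the
    # parameter instead of a hard-coded path.
    print('Reading source files from [$$SourceDir]');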

22. List some reasons why a job might fail to execute?

Answer:

Incorrect syntax, Job Server not running, port numbers for Designer and Job Server not matching.

23. List factors you consider when determining whether to run work flows or data flows serially or in parallel?

Answer:

Consider the following:

  • Whether or not the flows are independent of each other
  • Whether or not the server can handle the processing requirements of flows running at the same time (in parallel)

24. What does a lookup function do? How do the different variations of the lookup function differ?

Answer:

All lookup functions return one row for each row in the source. They differ in how they choose which of several matching rows to return.

25. List the three types of input formats accepted by the Address Cleanse transform.

Answer:

Discrete, multiline, and hybrid.

26. Name the transform that you would use to combine incoming data sets to produce a single output data set with the same schema as the input data sets.

Answer:

The Merge transform.

27. What are Adapters?

Answer:

Adapters are additional Java-based programs that can be installed on the job server to provide connectivity to other systems such as Salesforce.com or the JavaMessagingQueue. There is also a Software Development Kit (SDK) to allow customers to create adapters for custom applications.

28. List the data integrator transforms

Answer:

  • Data_Transfer
  • Date_Generation
  • Effective_Date
  • Hierarchy_Flattening
  • History_Preserving
  • Key_Generation
  • Map_CDC_Operation
  • Pivot
  • Reverse Pivot
  • Table_Comparison
  • XML_Pipeline

29. List the Data Quality Transforms

Answer:

  • Global_Address_Cleanse
  • Data_Cleanse
  • Match
  • Associate
  • Country_id
  • USA_Regulatory_Address_Cleanse

30. What are Cleansing Packages?

Answer:

These are packages that enhance the ability of Data Cleanse to accurately process various forms of global data by including language-specific reference data and parsing rules.

31. What is Data Cleanse?


Answer:

The Data Cleanse transform identifies and isolates specific parts of mixed data, and standardizes your data based on information stored in the parsing dictionary, business rules defined in the rule file, and expressions defined in the pattern file.

32. What is the difference between Dictionary and Directory?

Answer:

Directories provide information on addresses from postal authorities. Dictionary files are used to identify, parse, and standardize data such as names, titles, and firm data.

33. Give some examples of how data can be enhanced through the data cleanse transform, and describe the benefit of those enhancements.

Answer:

  • Gender codes: determine gender distributions and target marketing campaigns.
  • Match standards: provide fields for improving matching results.

34. A project requires the parsing of names into given and family, validating address information, and finding duplicates across several systems. Name the transforms needed and the task they will perform.

Answer:

  • Data Cleanse: Parse names into given and family.
  • Address Cleanse: Validate address information.
  • Match: Find duplicates.

35. Describe when to use the USA Regulatory and Global Address Cleanse transforms.

Answer:

Use the USA Regulatory transform if USPS certification and/or additional options such as DPV and Geocode are required. Global Address Cleanse should be utilized when processing multi-country data.

36. Give two examples of how the Data Cleanse transform can enhance (append) data.

Answer:

The Data Cleanse transform can generate name match standards and greetings. It can also assign gender codes and prenames such as Mr. and Mrs.

37. What are name match standards and how are they used?

Answer:

Name match standards illustrate the multiple ways a name can be represented. They are used in the match process to greatly increase match results.

38. What are the different strategies you can use to avoid duplicate rows of data when re-loading a job?

Answer:

  • Using the auto-correct load option in the target table.
  • Including the Table Comparison transform in the data flow.
  • Designing the data flow to completely replace the target table during each execution.
  • Including a preload SQL statement to execute before the table loads.

39. What is the use of Auto Correct Load?

Answer:

It prevents duplicate data from entering the target table. It works like a Type 1 load: rows are inserted or updated depending on whether they are non-matching or matching, respectively.

40. What is the use of Array fetch size?

Answer:

Array fetch size indicates the number of rows retrieved in a single request to a source database. The default value is 1000. Higher numbers reduce the number of requests, lowering network traffic and possibly improving performance. The maximum value is 5000.

41. What are the differences between Row-by-row select, Cached comparison table, and Sorted input in the Table Comparison transform?

Answer:

  • Row-by-row select — looks up the target table using SQL every time it receives an input row. This option is best if the target table is large.
  • Cached comparison table — loads the comparison table into memory. This option is best when the table fits into memory and you are comparing the entire target table.
  • Sorted input — reads the comparison table in the order of the primary key column(s) using a sequential read. This option improves performance because Data Integrator reads the comparison table only once. Add a query between the source and the Table_Comparison transform, then, from the query’s input schema, drag the primary key columns into the Order By box of the query.

42. What is the use of the Number of loaders option in the target table?

Answer:

Loading with one loader is known as single-loader loading; loading with more than one loader is known as parallel loading. The default number of loaders is 1, and the maximum is 5.

43. What is the use of Rows per commit?

Answer:

Specifies the transaction size in number of rows. If set to 1000, Data Integrator sends a commit to the underlying database every 1000 rows.

44. What is the difference between lookup(), lookup_ext(), and lookup_seq()?

Answer:

  • lookup(): returns a single value based on a single condition.
  • lookup_ext(): returns multiple values based on one or more conditions.
  • lookup_seq(): returns multiple values based on a sequence number.
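
As a rough sketch of lookup_ext() (the datastore DS_ERP, the EMPLOYEE table, and the column names are hypothetical, and the bracketed argument groups are normally generated by the Designer's function wizard, so the exact syntax may vary by version):

    # Return FIRST_NAME from DS_ERP.DBO.EMPLOYEE where EMPLOYEE_ID equals the
    # incoming EMP_ID, using a pre-loaded cache; 'none' is returned when no
    # match is found.
    lookup_ext([DS_ERP.DBO.EMPLOYEE, 'PRE_LOAD_CACHE', 'MAX'],
               [FIRST_NAME], ['none'],
               [EMPLOYEE_ID, '=', Query.EMP_ID])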

45. What is the use of History preserving transform?

Answer:

The History_Preserving transform allows you to produce a new row in your target rather than updating an existing row. You can indicate in which columns the transform identifies changes to be preserved. If the value of certain columns change, this transform creates a new row for each row flagged as UPDATE in the input data set.

46. What is the use of the Map_Operation transform?

Answer:

The Map_Operation transform allows you to change operation codes on data sets to produce the desired output. Operation codes: INSERT, UPDATE, DELETE, NORMAL, or DISCARD.

47. What is Hierarchy Flattening?

Answer:

Constructs a complete hierarchy from parent/child relationships, and then produces a description of the hierarchy in vertically or horizontally flattened format.

  • Parent Column, Child Column
  • Parent Attributes, Child Attributes.

48. What is the use of Case Transform?

Answer:

Use the Case transform to simplify branch logic in data flows by consolidating case or decision-making logic into one transform. The transform allows you to split a data set into smaller sets based on logical branches.
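
As an illustrative sketch (the column REGION_ID and the output label names are assumptions), the Case transform editor holds one expression per output label, and each input row is routed to the branch whose expression evaluates to true:

    REGION_ID = 1    # rows matching this expression go to the 'R_NORTH' output
    REGION_ID = 2    # rows matching this expression go to the 'R_SOUTH' output
    # Rows that match no expression go to the default output, if it is enabled.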

49. What must you define in order to audit a data flow?

Answer:

You must define audit points and audit rules when you want to audit a data flow.

50. List some factors for PERFORMANCE TUNING in data services?

Answer:

The following are ways you can adjust Data Integrator performance:

  • Source-based performance options: using array fetch size, caching data, join ordering, minimizing extracted data
  • Target-based performance options: loading method and rows per commit, staging tables to speed up auto-correct loads
  • Job design performance options: improving throughput, maximizing the number of pushed-down operations, minimizing data type conversion, minimizing locale conversion, improving Informix repository performance

Multiple Choice Questions

  • A project is:
      a. A single-use object
      b. The highest-level object
      c. Listed in the object library
      d. All of the above

Ans: d

  • Use an embedded dataflow to:
      a. Simplify dataflow display
      b. Reuse dataflow logic
      c. Debug dataflow logic
      d. All of the above

Ans: d

  • Types of variables and parameters one can create using the Variables and Parameters window (depending on the object selected):
      a. Local variables
      b. Global variables
      c. Parameters
      d. All of the above

Ans: d

  • What is true about global variables?
      a. They cannot be shared across jobs
      b. They can be shared across jobs

Ans: a

  • What is not true about the Merge transform?
      a. All input data sets must have the same structure, data types, and lengths
      b. The number of input and output data sets should match
      c. The number of input and output data sets should not match
      d. Input data set names should match

Ans: c

  • Which transform will perform IF/THEN/ELSE?
      a. Conditional
      b. Script
      c. Merge
      d. Query

Ans: a

  • What is a single-use object in BODS?
      a. Job
      b. Workflow
      c. Dataflow
      d. Project

Ans: d

  • Which is not a datastore?
      a. File format
      b. Database datastore
      c. Application datastore
      d. Adapter datastore

Ans: a

  • What is not true about array fetch size?
      a. The default array fetch size is 1000
      b. The maximum array fetch size is 5000
    (Both options are correct statements.)

  • A project contains:
      a. Job
      b. Workflow
      c. Dataflow
      d. All of the above

Ans: d

  • Reasons why a job might fail to execute:
      a. Incorrect syntax, Job Server not running, port numbers for Designer and Job Server not matching
      b. Incorrect syntax, Job Server running, port numbers for Designer and Job Server not matching
      c. Incorrect syntax, Job Server not running, port numbers for Designer and Job Server matching
      d. Correct syntax, Job Server not running, port numbers for Designer and Job Server not matching

Ans: a

  • The Pivot transform

Ans: Converts columns to rows

  • The Difference Viewer will compare:

Ans: Two objects

  • Object hierarchy

Ans: Project, Job, Workflow, Dataflow

  • Reusable objects:
      a. Job, workflow, dataflow
      b. Job, workflow, dataflow, scripts
      c. Job, workflow, dataflow, conditional
      d. Job, workflow, dataflow, try-catch

Ans: a

  • Which transform will act as IF/THEN/ELSE?

Ans: Conditional

  • Which action cannot be performed within the Case transform?
      a. Update
      b. Normal
      c. Delete
      d. Drop

Ans: d

  • Which is the correct sequence of transforms to populate a Type 2 Slowly Changing Dimension (SCD2)?
      a. Key_Generation, Table_Comparison, History_Preserving
      b. History_Preserving, Table_Comparison, Key_Generation
      c. Table_Comparison, History_Preserving, Key_Generation
      d. Table_Comparison, Key_Generation, History_Preserving

Ans: c

  • Which is the executable object in BODS?

Ans: Job

  • Name the transform that you would use to combine incoming data sets to produce a single output data set with the same schema as the input data sets:
      a. Conditional
      b. Script
      c. Merge
      d. Map_Operation

Ans: c

  • Which is not a datastore type?
      a. File format
      b. Siebel
      c. PeopleSoft
      d. JDE World

Ans: a

  • Which function retrieves GHI from ABC_DEF_GHI_123?

Ans: word_ext('ABC_DEF_GHI_123', 3, '_')
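
As a small sketch (the local variable $L_Region is an assumption), word_ext() can also be used in a script to pull a delimited token out of a string:

    # Extract the third underscore-delimited word ('GHI') from the code.
    $L_Region = word_ext('ABC_DEF_GHI_123', 3, '_');
    print('Extracted value: [$L_Region]');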

  • Function which returns multiple values based on one or more conditions:

Ans: lookup_ext()

  • Which is not a datastore type?
      a. Flat file
      b. Siebel
      c. Datastore object
      d. JDE World

Ans: a

  • Which is an executable object?
      a. Job
      b. Workflow
      c. Dataflow
      d. Query

Ans: a

  • Template table

Ans: When you execute the job, Data Services automatically creates the table structure in the database.

  • How can you use an existing dataflow in the current job?

Ans: Right-click and choose Replicate, then use it in the workflow workspace.

  • Datastore is:
      a. Sum_row()
      b. Aggregate_row()
      c. Count_row()

An Embedded Dataflow is a dataflow that is called from inside another dataflow. Data passes into or out of the embedded dataflow from the parent flow through a single source or target. The embedded dataflow can contain any number of sources or targets, but only one input or one output can pass data to or from the parent data flow

  • Embedded data flow:
      a. Can have any number of sources and targets, but only one input or output passing data to or from the parent data flow
      b. (The remaining options are similar, reversed variations)

Ans: a (see the paragraph above)

  • A datastore is an actual connection to a database that holds data.
  • The value that is constant in one environment but may change when the job is migrated to another environment:

Ans: Substitution parameter

  • Which of the following transforms is used to ‘push down’ operations to the data source?
      a. Hierarchy Flattening
      b. Data_Transfer
      c. XML_Pipeline
      d. Map_CDC_Operation

  • Which of the following is not true about a ******* cache table?
      a. It can be used as a target table
      b. It can be used as a lookup table
      c. It cannot be used as a lookup table
    (The answer is either option b or c.)

  • Which of the following is not a job design performance tuning operation?
      a. Maximizing locale conversion
      b. Minimizing data type conversion

  • Which one of the following is created using Variables and Parameters in the data flow?
      a. Global variables
      b. Local variables
      c. Parameters
      d. Functions

  • Which of the following objects is used in a workflow?
      a. Validation transform
      b. Map transform
      c. Workflow
      d. Query transform

  • Which of the following is used to execute a function?
      a. exec
      b. invoke
      c. call

  • Which scripting language is used in the smart editor?
      a. JavaScript
      b. Business Objects Data Services scripting language
  • The local repository and the central repository should have:
      a. The same version number

Read more about the local repository and the central repository on Google.

  • The Difference Viewer can compare all of the following except:
      a. The same object with different version numbers
      b. Different tables
      c. Different objects

  • W
      a. Call invoke
      b. Call merge

  • Which of the following transforms is not used at the query level?
      a. SQL
      b. Validation
      c. XML_Map
      d. Merge

