Friday 13 January 2012

ALE


ALE provides a complete programming model for implementing BAPIs. ALE supports these method calls:

• Synchronous method calls

Synchronous method calls can also be used in ALE distribution scenarios. These method calls are either BAPIs or Dialog Methods [Page 191].
In ALE Customizing you can assign the RFC destinations to be used for a synchronous method call.

• Asynchronous method calls

If BAPIs are called asynchronously, ALE error handling and ALE audit can be used.
If an asynchronous BAPI call is to be used for the distribution, the BAPI-ALE interface required for inbound and outbound processing can be generated automatically. Developing an ALE business process in ABAP is then restricted to programming the BAPI itself.

An object-oriented approach has the following advantages:
- The application only has to maintain one interface
- The automatic generation of the BAPI-ALE interface avoids programming errors.

Process Flow

If you are not enhancing an SAP BAPI and you are not creating your own BAPI when you are implementing an ALE business process, you can simply follow the steps below:
• Filtering Data
• Determining the BAPI Receivers

If, on the other hand, you want to enhance a BAPI or create your own, you have to follow these steps:

• Implementing Your Own BAPIs
• Maintaining the BAPI-ALE Interface
• Determining the BAPI Receivers

Application programs must call a function module for the receiver determination and a generated application function module of the BAPI-ALE interface.

You can verify the quality of the ALE layer and of ALE business processes using Automatic Tests.


BAPIs can be called by applications synchronously or asynchronously. ALE functions such as BAPI maintenance in the distribution model and receiver determination can be used for both types of call.

Note that synchronously called BAPIs should only be used for reading external data, to avoid database inconsistencies arising from communication errors.

The application synchronously calls a BAPI in the external system to create an FI document. The document is correctly created but the network crashes whilst the BAPI is being executed. An error message is returned to the application and the FI document is created again.

The document has been duplicated in the called system.
An application program can avoid this only by implementing a two-phase commit, that is, by thoroughly checking the data in the external system.
An easier solution is to call the BAPI asynchronously, as ALE error handling ensures that the data remains consistent.
A BAPI should be implemented as an asynchronous interface if one of the criteria below applies:

• Consistent database changes in both systems

Data must be updated in the local system as well as on a remote system
• Loose coupling

A synchronous interface would couple the client and server systems too tightly: if the connection fails, the client system can no longer function correctly.

• Performance load

The interface is used frequently or handles large volumes of data. A synchronous interface cannot be used in this situation because performance would be too low.
If you want to implement a BAPI as an asynchronous interface, you have to generate a BAPI-ALE interface for an existing BAPI.


The processes in the application layer and the ALE layer are completed on both the inbound and outbound processing sides. The communication layer transfers the data by transactional Remote Function Call (tRFC) or by EDI file interface.

The process can be divided into the following sub-processes:

1. Outbound Processing
• Receiver determination
• Calling the generated outbound function module
• Conversion of BAPI call into IDoc
• Segment filtering
• Field conversion
• IDoc version change
• Dispatch control

2. IDoc dispatch

IDocs are sent in the communication layer by transactional Remote Function Call (tRFC) or by other file interfaces (for example, EDI).
tRFC guarantees that the data is transferred once only.

3. Inbound Processing

• Segment filtering
• Field conversion
• Transfer control
• Conversion of IDoc into BAPI call
• BAPI function module call
• Determination of IDoc status


• Posting of application data and IDoc status
• Error handling

The sub-processes in inbound and outbound processing are described below:
Outbound Processing
On the outbound side first of all the receiver is determined from the distribution model.

Then the outbound function module that has been generated from a BAPI as part of the BAPI-ALE interface is called in the application layer (see also Example Programs with Asynchronous BAPI Calls [Page 42]).

In the ALE layer the associated IDoc is filled with the filtered data from the BAPI call.
The volume of data and the time of the data transfer are controlled by the dispatch control.
The outbound processing consists of the following steps:
Receiver determination
The receivers of a BAPI call are defined in the distribution model in the same way as for synchronous BAPI calls.
Before the BAPI or generated BAPI-ALE interface can be called, the receiver must be determined.

When the receiver is determined, the filter objects are checked against the specified conditions and the valid receivers are reported back.
If the distribution of the data is also dependent on conditions, these dependencies between BAPIs or between BAPIs and message types are defined as receiver filters.

For each of these receiver filters, a filter object is created before the distribution model is defined; its value at runtime determines whether the condition is satisfied or not.

For more information see Determining Receivers of BAPIs [Page 35].
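As a sketch, the receiver determination for an asynchronous BAPI might look as follows in ABAP. The function module name, its parameter and exception names, and the object type/method used are assumptions for illustration and may differ between releases; verify them in your system.

```abap
* Sketch: determine the ALE receivers of an asynchronous BAPI from
* the distribution model. Function module, parameter and exception
* names as well as the object type/method are assumed here.
DATA: lt_receivers TYPE STANDARD TABLE OF bdi_logsys,
      ls_receiver  TYPE bdi_logsys.

CALL FUNCTION 'ALE_ASYNC_BAPI_GET_RECEIVER'
  EXPORTING
    object                 = 'BUS1001'      " illustrative object type
    method                 = 'SAVEREPLICA'  " illustrative method
  TABLES
    receivers              = lt_receivers
  EXCEPTIONS
    error_in_filterobjects = 1              " assumed exception names
    OTHERS                 = 2.

IF sy-subrc = 0.
  LOOP AT lt_receivers INTO ls_receiver.
    " each entry names one valid logical receiver system
  ENDLOOP.
ENDIF.
```

The filter objects maintained in the distribution model are evaluated inside this call, so the application program only sees the list of valid receivers.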
Calling the generated outbound function module
Once the receivers have been determined, you have to differentiate between local and remote receivers.

The BAPI can be called directly for local receivers. For remote calls the generated ALE outbound function module must be executed so that processing is passed to the ALE layer. The data for the BAPI call and the list of allowed logical receiver systems are passed to this function module.


Programming Notes:

After calling the generated function module, the application program must execute the command COMMIT WORK.

The implicit database commit at the end of the transaction is not sufficient. The COMMIT WORK does not have to be executed immediately after the call; it can be executed at a higher call level after the function module has been called several times.
The IDocs created are locked until the transaction has been completed. To unlock them earlier, you can call the following function modules:

• DEQUEUE_ALL releases all locked objects.
• EDI_DOCUMENT_DEQUEUE_LATER releases individual IDocs whose numbers are transferred to the function module as parameter values.
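Put together, an outbound call could look like the following sketch. The generated function module name and its parameters are illustrative (the generator derives them from the BAPI); only COMMIT WORK and DEQUEUE_ALL are standard statements/modules.

```abap
* Sketch of an outbound call via a generated ALE function module.
* 'ALE_MATERIAL_SAVEREPLICA' and its parameters are illustrative;
* the actual name/signature comes from the BAPI-ALE interface
* generation for the BAPI in question.
DATA: ls_headdata  TYPE bapimathead,       " illustrative structure
      lt_receivers TYPE STANDARD TABLE OF bdi_logsys.

CALL FUNCTION 'ALE_MATERIAL_SAVEREPLICA'
  EXPORTING
    headdata  = ls_headdata
  TABLES
    receivers = lt_receivers.

* The COMMIT WORK may also be issued at a higher call level after
* several calls; the implicit database commit is not sufficient.
COMMIT WORK.

* Optional: release the IDoc locks before the transaction ends.
CALL FUNCTION 'DEQUEUE_ALL'.
```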


Data Filtering

Two filtering services can be used: parameter filtering with conditions, and unconditional interface reduction.



• If entire parameters have been deactivated by the interface reduction, they are not included in the IDoc. If, on the other hand, only individual fields of structured parameters are excluded, the entire parameters are still included in the IDoc.

• With parameter filtering, table rows that have been filtered out are not included in the IDoc.

For more information see Filtering Data [Extern].

Conversion of BAPI call into IDoc

Once the data has been filtered, an IDoc containing the data to be transferred is created from the BAPI call by the outbound function module.

Segment filtering

Once the IDoc has been created, IDoc segments can be filtered again. This filtering is rarely used for BAPIs.

Field conversion

Field conversions are important for converting data fields to exchange information between R/2 and R/3 Systems. For example, the field plant can be converted from a two-character field to a four-character field. Standard Executive Information System (EIS) tools are used to convert fields.

IDoc version change

To guarantee that ALE works correctly between different releases of the R/3 System, IDoc formats can be converted to adapt message types to different release statuses. SAP uses the following rules to convert existing message types:

• Fields can be appended to a segment type
• New segments can be added

ALE Customizing records the version of each message type used for each receiver. The IDoc is created in the correct version in outbound processing.

Dispatch control

Once the version change has been completed, the IDocs are stored in the database and the dispatch control is started, which decides which of these IDocs are sent immediately.

Scheduling the dispatch time:

• IDocs can either be sent immediately or in the background. This setting is made in the partner profile.
• If the IDoc is sent in the background, a job has to be scheduled. You can choose how often background jobs are scheduled.


Distribution Using BAPIs

Controlling the amount of data sent:

• IDocs can be dispatched in packets. The packet size is assigned per partner profile in ALE Customizing (Basis → Application Link Enabling → Modeling and Implementing Business Processes → Partner Profiles and Time of Processing → Maintain Partner Profile Manually, or Generate Partner Profiles). This setting is only effective if you process the IDocs in the background.


Posting of application data and IDoc status

If each IDoc or BAPI is processed individually, the data is written immediately to the database. If several IDocs are processed within one packet, the following may happen:

• The application data of the successfully completed BAPI together with all the IDoc status records is updated, provided that no BAPI call has been terminated within the packet.

• As soon as a BAPI call is terminated within the packet, the status of the associated IDoc will indicate an error. Application data will not be updated. Then inbound processing is run again for all the BAPI calls that had been completed successfully.

Provided that there is no termination during this run, the application data of BAPIs and all the IDoc status records are updated. This process is repeated if there are further terminations. Note: Packet processing is only carried out if there is no serialization.

Error handling

You can use SAP Workflow for ALE error handling:

• The processing of the IDoc or BAPI data causing the error is terminated.
• An event is triggered. This event starts an error task (work item).

• Once the data of the BAPI or IDoc has been successfully updated, an event is triggered that terminates the error task. The work item then disappears from the inbox of the inbound system.

Implementing Your Own BAPIs
SAP provides a large number of BAPIs. If you want to implement your own BAPIs, you have to use your own namespace.
Procedure


You have the following options:

• You can develop your own BAPI in the customer namespace.
• You can modify a BAPI delivered in the standard system.

1. Copy and modify the function module belonging to the original BAPI.

2. In the Business Object Repository create a subobject type for your BAPI object type in the customer namespace (Tools → Business Framework → BAPI Development → Business Object Builder).

When you create the subobject type, the subtype inherits the methods of the business object type.

3. Set the status of the object type to Implemented (Edit → Change release status → Object type).

4. You can change and delete the methods of the subtype or enhance them with your own methods.


Notes about Asynchronous BAPIs

If you want to implement an asynchronous ALE business process, you have to Define a BAPI-ALE Interface from the BAPI.
If you implement a BAPI as an asynchronous interface, in addition to following the standard programming BAPI guidelines, keep in mind the following:

• The BAPI must not issue a COMMIT WORK command.
• The method's return parameter must use the reference structure BAPIRET2.
• All BAPI export parameters with the exception of the return parameter are ignored and are not included in the IDoc that is generated.
• Status records log the BAPI return parameter values.
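A minimal skeleton observing these rules might look as follows; all names except the reference structure BAPIRET2 are hypothetical, and the interface is shown as comments in the style of the Function Builder.

```abap
FUNCTION z_bapi_widget_savereplica.
*"----------------------------------------------------------------
*" IMPORTING
*"   VALUE(HEADDATA) TYPE ZWIDGET_HEAD    " hypothetical structure
*" EXPORTING
*"   VALUE(RETURN) TYPE BAPIRET2          " mandatory reference structure
*" TABLES
*"   ITEMDATA STRUCTURE ZWIDGET_ITEM      " hypothetical table parameter
*"----------------------------------------------------------------

  CLEAR return.

  " ... update the application tables here ...

  " Report success/failure only through RETURN (fill TYPE, ID,
  " NUMBER, MESSAGE). Do NOT issue a COMMIT WORK here - the caller
  " or the ALE layer commits.

ENDFUNCTION.
```

Note that export parameters other than RETURN would be ignored in asynchronous use, so the skeleton defines none.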


After the function module which converts the IDoc into the corresponding BAPI in the receiving system has been called, status records are written for the IDoc in which messages sent in the return parameter are logged.


If the field Type is filled with A (abort) or E (error) in at least one entry of the return parameter, this means:

• Type A:

Status 51 (error, application document not posted) is written for all status records after a ROLLBACK WORK has been executed.

• Type E:

Status 51 (error, application document not posted) is written for all status records and a ROLLBACK WORK is executed.

Otherwise status 53 (application document posted) is written and a COMMIT WORK executed.
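The decision described above corresponds to logic along these lines. This is a sketch of what the generated inbound processing does, not code you normally write yourself:

```abap
* Sketch of the inbound status decision: any entry of type 'A' or
* 'E' in the return table leads to ROLLBACK WORK and IDoc status 51,
* otherwise to COMMIT WORK and IDoc status 53.
DATA: lt_return TYPE STANDARD TABLE OF bapiret2,
      ls_return TYPE bapiret2,
      lv_error  TYPE abap_bool VALUE abap_false.

LOOP AT lt_return INTO ls_return.
  IF ls_return-type CA 'AE'.   " abort or error
    lv_error = abap_true.
    EXIT.
  ENDIF.
ENDLOOP.

IF lv_error = abap_true.
  ROLLBACK WORK.   " all status records get status 51
ELSE.
  COMMIT WORK.     " status 53: application document posted
ENDIF.
```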

Filtering Data
There are two filtering services provided for asynchronous BAPI calls using the BAPI-ALE interface.

Interface Reduction: If you want to reduce the BAPI interface, you do not have to define any filter object types. The BAPI reduction is not linked to conditions; it is a projection of the BAPI interface. The developer of the BAPI whose interface is to be reduced must create the BAPI as reducible by using appropriate parameter types. The optional BAPI parameters and/or BAPI fields are deactivated for the data transfer in the distribution model. You can reduce an interface in two ways:

• By fields (using checkbox lists)
• Fully

Parameter Filtering: Filter Object Types [Extern] are assigned to the business object method. The valid filter object values must be defined in the distribution model. BAPI parameter filtering is linked to conditions and is therefore content-dependent: the lines of the table parameters of asynchronous BAPIs that are transferred to a receiver are determined by the values in those lines (or in dependent lines).

Filters are used to define conditions in the form of parameter values that must be satisfied by BAPIs before they can be distributed in ALE outbound processing. The table dataset of a BAPI is determined when the parameters are filtered. Hierarchy relationships between table parameters of the BAPI can also be defined. Distribution by Classes [Extern] is also supported.

For more information see Filtering BAPI Parameters.

BAPI filtering is the term used for the shared use of both filter services of the BAPI interface. BAPI filtering is implemented as a service in ALE outbound processing.

Prerequisites for Using Filter Services

The table below lists the prerequisites that the BAPI interface must satisfy so that the ALE filter services can be used. The BAPI can have the following parameter types:

Parameter type                                 | Field Reduction | Full Filtering | Parameter Filtering
1. Unstructured without checkbox               |        -        |       -        |         -
2. Unstructured with checkbox                  |        X        |       -        |         -
3. Single-line structured without checkbox     |        -        |       -        |         -
4. Single-line structured with checkbox        |        X        |       -        |         -
5. Multiple-line structured without checkbox   |        -        |       X        |         X
6. Multiple-line structured with checkbox      |        X        |       X        |         X
7. Multiple-line unstructured without checkbox |        -        |       X        |         -
8. Multiple-line unstructured with checkbox    |        -        |       -        |         -

Note: An X indicates that the parameter type satisfies the prerequisites for the filter service.

Explanation of above table:

1. An unstructured parameter without a checkbox is, for example, a BAPI key field (e.g. the parameter Material in methods of the business object Material). This parameter type cannot be reduced.

2. If there is an unstructured checkbox parameter with the name PX and the data element BAPIUPDATE for an unstructured parameter with the name P, the parameter P is reducible. The parameter is reduced by setting the value of P and of the checkbox parameter PX to EMPTY.

3. A single-line, structured parameter without a checkbox is not reducible.
4. A single-line, structured parameter P with structure S and associated checkbox PX with structure SX can be reduced by fields, provided that:

• S and SX have the same number of fields, which are identical in name and sequence.
• The FUNCTION field and the key fields in S and SX each have the same data element.
• All other fields in SX have the data element BAPIUPDATE.

The FUNCTION field in P and the key fields must be marked as mandatory fields. For all other fields you can choose whether to mark them as mandatory. Mandatory fields cannot be reduced. Non-mandatory fields are reduced by setting the field values and the corresponding checkbox fields to EMPTY.

5. Multiple-line structured parameters (table parameters) without a checkbox cannot be reduced by fields. Parameter filtering and full filtering are possible.

If the hierarchy is maintained and, if dependent tables exist in the hierarchy, records of the dependent tables will also be filtered.
6. A multiple-line structured parameter P with checkbox PX can be reduced by fields, fully filtered or filtered by parameters.

• For field reduction the prerequisites under 4 must be met.
• The checkbox PX must lie directly under P in the hierarchy, with identical key fields, so that the corresponding lines from P and PX are filled, when the parameters are filtered.
• If the hierarchy is maintained and, if dependent tables exist in the hierarchy, records of the dependent tables will also be filtered.

7. A multiple-line, unstructured parameter can only be fully filtered and cannot be used in a hierarchy. Parameter filtering is not allowed.

8. Multiple-line, unstructured parameters with a checkbox cannot be filtered.



Reducing Interfaces

Use

The purpose of BAPI and ALE integration is to be able to use ALE business process BAPIs as interfaces.
BAPI reductions are particularly necessary in ALE business processes in which master data is replicated asynchronously:

• Part of the BAPI parameters is not required in the receiving system, even though it is supplied when the BAPI is called.
• Data transferred to non-SAP systems (non-R/3 and/or between business partners) must be monitored (for example, fields hidden).
• Certain data cannot be overwritten in the receiving system.

BAPI reductions can, however, be used wherever asynchronous BAPI calls can be used. For asynchronous BAPI calls via the BAPI-ALE interface, only the parameters of the BAPI interface relevant for the receiver should be transferred.

You can set up BAPI reductions in receiver-dependent filtering in the ALE distribution model. You can create templates for making reductions.

Example: Material master data is replicated from a reference system to a sales and distribution system.


As only some of the data on the material is required in the sales and distribution system, a reduction of the BAPI interface, Material.SaveReplica, that contains parameters relevant only to sales and distribution, is specified. You can then specify in the distribution model that with Material.SaveReplica only data relevant to sales and distribution is transferred to the sales and distribution system.


You can filter BAPIs (parameter filtering and reduction) when you maintain the distribution model. Reduction and filter information is part of the ALE Customizing data in the distribution model. BAPI filtering must be explicitly activated when the BAPI-ALE interface is generated.

The reduction of the actual (asynchronous) BAPI call is carried out as a service in the ALE layer. The reduction service retrieves the details of the filter settings from the distribution model at runtime. For a receiver or a list of receivers the application development can query the list of parameters to be filled before the BAPI-ALE interface is called.

This keeps the read accesses to the database as low as possible. (This query is optional and does not affect the result of the filtering.) You can only set up one BAPI reduction for each sender and receiver pair.

Prerequisites

The basic data of the BAPI reduction is maintained by the BAPI developer after the BAPI has been released and before the BAPI-ALE interface is generated.

If a parameter hierarchy is to be used, this has to be specified beforehand. The BAPI developer must create the BAPI as reducible using relevant parameter types. Mandatory parameters and fields must be specified.

The section Filtering Data has a table listing the prerequisites for using filter services.

Fully Reducible Parameters

Only table parameters of BAPIs can be fully reduced. A fully reduced table is an empty table in the receiving system.
To fully reduce a table parameter T1 with a checkbox, the following prerequisites apply:
Table Parameter | Structure
T1              | Q1
T1X             | Q1X

T1X is a checkbox parameter.



Reducing Parameter Fields

Fields are reduced by converting the obligatory check fields of a BAPI and initializing the relevant fields in the data parameter. The checkboxes must be assigned to the data parameters following the naming and structure conventions.
The following prerequisites apply for reducing fields of parameter P1:
Table Parameter | Structure
P1              | S1
P1X             | S1X

Structures S1 and S1X must have the same number of fields, whereby the names of the fields in both parameters must be identical and in the same order.

If P1 has a FUNCTION field or key fields, the FUNCTION field and each of the key fields must have the same data element in S1 and S1X. All other fields of the checkbox structure use the data element BAPIUPDATE.
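As an illustration of these conventions, a reducible data/checkbox structure pair could be declared as follows. All names are hypothetical (in practice S1 and S1X are DDIC structures); only the data element BAPIUPDATE is from the conventions above.

```abap
* Hypothetical reducible parameter pair: data structure (like S1)
* and checkbox structure (like S1X) with identical field names and
* order. Non-key, non-FUNCTION fields of the checkbox structure use
* the data element BAPIUPDATE.
TYPES: BEGIN OF zs1,                 " data structure
         function TYPE msgfn,        " same data element as in ZS1X
         material TYPE char18,       " key field: same data element
         descr    TYPE char40,
         weight   TYPE char17,
       END OF zs1,
       BEGIN OF zs1x,                " checkbox structure
         function TYPE msgfn,        " identical to ZS1
         material TYPE char18,       " identical to ZS1
         descr    TYPE bapiupdate,   " 'X' = transfer field DESCR
         weight   TYPE bapiupdate,   " 'X' = transfer field WEIGHT
       END OF zs1x.
```

Fields whose checkbox is initial are reduced: both the data field and its flag are set to EMPTY for the transfer.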
Procedure
To reduce BAPIs:
1. Create a reducible BAPI that satisfies the above prerequisites.
2. Before generating the BAPI-ALE interface, you have to activate data filtering (option Data filtering allowed).

You can set up the filtering in the distribution model in Customizing by choosing Distribution (ALE) → Modeling and Implementing Business Processes → Maintain Distribution Model.
Result
The generated BAPI-ALE interface enables BAPIs to be filtered as a service in outbound processing.

To avoid unnecessary database accesses, the BAPI parameters required for the receivers can be determined before the BAPI-ALE interface is called. This is optional and does not affect the results of the filtering.


Defining and Assigning Filter Object Types

Filter object types are already assigned to some BAPIs in your applications for receiver determination and data filtering. You can also define your own filter object types and assign them to a BAPI or to a parameter of a BAPI.
Process Flow
To define filter object types for BAPIs, follow the steps below:

• Define filter object types

From the SAP menu choose Tools → ALE → ALE Development → BAPIs. You can create filter object types under Data filtering or Receiver Determination. Then choose Define filter object type (transaction BD95, table TBD11).

Give the filter object type a name and specify a reference to a table field. The reference to a table field is needed to retrieve the documentation from the data element so that customers can get input help when maintaining the distribution model.


For this reason a foreign key must be maintained for the table field. Use the following conventions to name filter object types:

- Release 3.0/3.1: Domain name (example: KOKRS for the controlling area)
- Release 4.0: Default field name of the data element (example: COMP_CODE for the company code)


For the required data object check whether a name has already been entered in the domain and default field names of the data element. If the fields are empty, a new filter object must be created. Usually the filter object will also appear in the BAPI interface as a field in the transfer structure, for example, bapiachead-comp_code.


If this is the case, create the filter object as follows:

- ALE object type: field name in the BAPI structure, for example, comp_code
- Table name: name of the BAPI structure, for example, bapiachead
- Field name: field name in the BAPI structure, for example, comp_code

• Assign filter object types to a BAPI

From the SAP menu choose Tools → ALE → ALE Development → BAPI Interface. You can assign filter object types to a BAPI under Data filtering or Receiver Determination.

The filter object types allowed for an object method are maintained in each view of table TBD16.

− Receiver determination: Choose Assign filter object type to BAPI. You can maintain the entries: Object type (from table TOJTB), Method, Filter object type (from table TBD11). Keep in mind that for receiver determination you have to implement a business add-in to determine values for the filter object type you have defined.


− Data filtering: Choose Assign filter object type to parameter. You can maintain the entries: Object type (from table TOJTB), Method, Filter object type (from table TBD11), Parameter, Field name.

Enter the required data to assign a filter object to an object method for receiver determination or parameter filtering.


Filtering BAPI Parameters
Use
Parameter filtering enables you to manage the number of datasets to be replicated through the BAPI interface using filter objects in the ALE distribution model. The parameters filtered are BAPI table parameters. The lines in the BAPI parameter that do not match the distribution conditions are filtered out. The filtered table lines are not replicated.

Example: The logical system Q4VCLNT800 is the BAPI server for the BAPI RetailMaterial.Clone. Parameter filtering ensures that only the plant data of plant 001 is replicated to this system.
Prerequisites

The prerequisite for this filtering is that a Filter Object Type is assigned to the relevant BAPI in your SAP applications. For some BAPIs SAP has already defined and assigned filter object types.
You can also define your own filter object types and assign them to a BAPI (see Defining and Assigning Filter Object Types).
You have to define the valid filter object values in the distribution model. For more information see the R/3 Implementation Guide under Distribution (ALE) → Modelling and Implementing Business Processes → Maintain Distribution Model.

Presently, BAPI parameters can only be filtered for distributing master data via BAPIs called asynchronously. For this reason the required ALE Customizing for parameter filtering is only allowed for asynchronous BAPIs with an ALE IDoc interface.

Parameter filtering is allowed for distributing transaction data via asynchronous BAPIs, but in most cases it serves no purpose.
Parameter filtering for asynchronous BAPIs is always optional. To use it, you must select the Activate checkbox when generating the BAPI-IDoc interface; otherwise the corresponding coding is not generated in the BAPI-IDoc interface.

If a BAPI-IDoc interface has been generated without parameter filtering, you cannot specify parameter filtering in ALE Customizing afterwards.
Features
Parameters are filtered dynamically at runtime using the current data in the BAPI table parameters and the distribution conditions specified in the ALE distribution model. The filter service performs the following steps:

• Reads the specified parameter filter objects in the distribution model
• Reads the interface definition of the BAPI
• Reads the table field values of a table entry for the associated filter objects
• Compares the distribution conditions with the filter objects read out and determines the value of the logical expression
• Deletes table entries that do not satisfy the distribution conditions
• Examines hierarchy-dependent BAPI parameters and, if applicable, deletes dependent table entries
• For synchronous BAPIs: Calls associated function module and forwards the filtered parameters
• For asynchronous BAPIs: Calls the generated BAPI-ALE interface and forwards the filtered parameters.


The BAPI parameter filtering can also take into account a hierarchical dependency between BAPI table parameters.

You must specify any hierarchical dependencies before you generate the BAPI-ALE interface of the BAPI.
The specified hierarchy is evaluated when the interface is generated and incorporated in the interface coding.
The BAPI-ALE interface must be regenerated following all subsequent changes made to the hierarchy.
Once the generated IDoc type has been released, the specified hierarchy of the asynchronous BAPI cannot subsequently be changed because of compatibility problems.

Defining Hierarchies Between BAPI Parameters
Use
If you are developing your own ALE business processes, you may have to define dependencies between BAPI table parameters with regard to filtering parameters for data selection.

These dependencies are defined by the field references between the table parameters of BAPIs.
You can filter parameters to determine the dataset and define dependencies only for the distribution of master data via BAPIs that are called asynchronously.

A BAPI for material master data contains the tables for plant data and associated storage data.
The table containing plant data has a reference to the table containing storage data via the key field PLANTS.
There is a hierarchical dependency between the plant and storage data.
If plant 001 of a material is not to be replicated due to parameter filtering, then none of the storage data for plant 001 will be replicated.
Prerequisites
You use BAPI parameter filtering to manage the size of the dataset in the BAPI interface.

Procedure

You can define these hierarchical dependencies in ALE Development under BAPI → Maintain hierarchy of table parameters.
Enter the object type and the method of the BAPI. You can display existing BOR object types and their associated methods using the input help (F4).
The following processing options are available under the menu Hierarchy :
• Create
• Change
• Display
• Delete


Create Hierarchy

The system first checks whether a hierarchy already exists for the BAPI. It then checks whether an ALE IDoc interface has already been generated and whether the associated IDoc type has been released.
If the IDoc has already been released, then the generated interface has already been delivered to customers and no hierarchy can be created or changed for an existing BAPI because of compatibility problems.
In this case you have to create a new BAPI; a corresponding error message is displayed. If the ALE interface already exists but the IDoc has not yet been released, the system informs you that the interface needs to be regenerated.
A hierarchy tree is displayed on the next screen. For details see Editing the Hierarchy Display further below.

Change Hierarchy

The same checks are made as when you create a hierarchy. On the next screen the same processing options are provided as when you create a hierarchy.

Display Hierarchy

The same checks are made as when you create a hierarchy. On the next screen you cannot make any changes to the hierarchy.



To display the field references between the tables, double-click on the parent table. The parent table is automatically copied to the next dialog box.
Select one of the child tables from the input help. Select Field references to display the field references.
Delete Hierarchy
The same checks are made as when you create a hierarchy. Once you have confirmed you want to delete the BAPI hierarchy, it is deleted.
Editing the Hierarchy Display
The root node in the hierarchy display corresponds to the function module of the BAPI. The root node is used only for display and is not saved. Also, it cannot be changed. You can edit the hierarchy display as follows:
• Insert table parameters
• Delete table parameters
• Define field references between parent and child tables
• Save hierarchy
Parent tables inserted directly under the root node that do not have child tables are not saved.
If only this type of table is created, there is no hierarchy and therefore a hierarchy cannot be saved.
Insert table parameters
Place the cursor on a hierarchy node and choose Edit → Insert table parameters. If you place the cursor on a root node, you can select a parent table of the highest level via the input help. If a table exists above the marked node, this is copied to the next dialog box and you can add a child table to this table.
In principle a table can only exist once in the hierarchy. You can display the available tables via the input help.
In the dialog box, select Field references to display the common fields of the parent and child tables in which field references can be defined. You can mark the fields for which a field reference is to be defined.
If no field references exist between the two tables, an error message is displayed.
Delete table parameters
To delete a table, place the cursor on the hierarchy node showing the table name of the child table.
Confirm the deletion. All other child tables of the deleted table will also be deleted.
Define field references between parent and child tables
Place the cursor on the node of a child table and choose Edit → Table parameters → Change field references, or select the Field reference pushbutton with the change icon. The next dialog box contains the parent parameters and, provided that a reference exists, the child parameters too.
When you create a table parameter, you can select the associated child table via the input help. You can display common fields by selecting Field references.
Only field references whose fields have the same names in the parent and child tables are displayed. You can define the references between the fields by marking the appropriate entries. Field references that are already defined are marked.


Save hierarchy
To save a hierarchy, choose Hierarchy → Save.
A transport request is generated to send the associated Customizing table to the correction and transport system.
The hierarchy is not saved if an error occurs when accessing the database. A corresponding error message is displayed.


Maintaining BAPI-ALE Interfaces

The standard R/3 System contains a large quantity of business objects and BAPIs. These include BAPI-ALE interfaces that are generated from BAPIs and enable asynchronous BAPI calls in ALE business processes.
You can develop your own BAPIs in the customer namespace and generate the associated BAPI-ALE interface. The following objects are generated for a BAPI:
• Message type
• IDoc type, including segments
• Function module called on the outbound processing side (it creates and sends the IDoc from the BAPI data)
• Function module that calls the BAPI with the IDoc data on the inbound processing side
The difference to manually maintained message types is that the function module that processes the change pointers does not create an IDoc.
Instead it fills the corresponding BAPI structures, determines the receivers and calls the generated ALE function module. The message types, IDoc types and function modules that have been generated can also be used to distribute master data using the SMD tool.
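The role of the generated outbound function module can be pictured with the following Python sketch. It is not the generated ABAP, and every name in it is invented; it only illustrates the idea of packing BAPI parameter data into IDoc segments, one communication IDoc per receiver.

```python
# Conceptual sketch of a generated outbound module: pack the BAPI data into
# IDoc segments and produce one communication IDoc per determined receiver.

def generated_outbound(bapi_data, receivers, message_type):
    """bapi_data: dict of parameter name -> rows; returns IDocs to dispatch."""
    idocs = []
    for receiver in receivers:
        segments = [{"segment": name, "data": rows}
                    for name, rows in bapi_data.items()]
        idocs.append({"mestyp": message_type,
                      "rcvprn": receiver,   # receiver partner
                      "segments": segments})
    return idocs
```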

Prerequisites

The essential prerequisite is that a BAPI exists:
• You have developed your own BAPI in the customer namespace.
• You have modified a BAPI from the standard system.
The BAPI-ALE interface is then created in the customer namespace for the new sub-type and a method assigned to it.
Regardless of whether SAP delivers a BAPI-ALE interface for a BAPI in the new release, any interface you have generated will continue to function in the same way as in the earlier release.
You can regenerate the old interface to adapt newly added parameters, provided that SAP has not delivered a new interface in the new Release.
If SAP delivers a BAPI-ALE interface for a BAPI for which you have already generated an interface, you should use the new interface and delete the interface you generated. You can still use the old interface in the earlier Release.
If you regenerate the old interface, some generated objects, such as segments, could overwrite SAP objects if the interface references BAPI structures that belong to SAP.
If you want to take hierarchical dependencies between BAPI table parameters into account, a further prerequisite is that you define the hierarchy before generating the BAPI-ALE interface.

The specified hierarchy is evaluated when the interface is generated and incorporated in the interface coding.
The BAPI-ALE interface must be regenerated following all subsequent changes made to the hierarchy.
Once the generated IDoc type has been released, the specified hierarchy of the asynchronous BAPI cannot subsequently be changed because of compatibility problems.

ALE is an R/3 technology for distributing data between independent R/3 installations. ALE is an application built on top of the IDoc engine; it simply adds a structured way for R/3 to determine the sender, the receiver, and the triggering events for data distribution.

Make Use of ALE for Your Developments :

• Transfer master data for material, customer, supplier and more to a different client or system with transaction BALE.
• Copy your settings for the R/3 classification and variant configurator to another system, also in BALE.
• Copy pricing conditions with ALE from the conditions overview screen (e.g. VV12).

Distribution Scenario Based on IDocs:

ALE has become very popular in business circles. While it sounds mysterious, like an ingenious solution, it is simply a means to automate data exchange between SAP systems. It is mainly meant to distribute data from one SAP system to the next. ALE is a mere enhancement of SAP-EDI and SAP-RFC technology.

ALE is an SAP-designed concept to automatically distribute and replicate data between networked and mutually trusting systems.

EXPLANATION :

Imagine your company has several sister companies in different countries. Each
company uses its own local SAP installation. When one company creates master
data, e.g., material or customer master, it is very likely that these data should be
known to all associates. ALE allows you to immediately trigger an IDoc sent to all
associates as soon as the master record is created in one system.

Another common scenario is that a company uses different installations for company
accounting and production and sales. In that case, ALE allows you to copy the
invoices created in SD immediately to the accounting installation.

ALE defines a set of database entries which are called the ALE scenario. These tables contain the information as to which IDocs shall be automatically replicated to one or more connected R/3-compatible data systems.


To conclude: ALE is not a new technology. It is only a handful of customising settings
and background routines that allow timed and triggered distribution of data to and
from SAP or RFC-compliant systems. ALE is thus a mere enhancement of SAP-EDI and SAP-RFC technology.

Example :

Let us assume that we want to distribute three types of master data objects: the
material master, the creditor master, and the debtor master.

Let us assume that we have four offices. This graphic scenario shows the type of
data exchanged between the offices. Each of these offices operates its own stand-alone
R/3 system. Data is exchanged as IDocs, which are sent by the sending
office and received by the receiving office.
ALE DISTRIBUTION SCENARIO

ALE is a simple add-on application based on the IDoc concept of SAP R/3. It consists of a couple of predefined ABAPs which rely on the customisable distribution scenario. These scenarios simply define the IDoc types and the pairs of partners which exchange data.


ALE defines the logic and the triggering events which describe how and when IDocs are exchanged between the systems. Once the ALE engine has determined which data to distribute, it calls an appropriate routine to create an IDoc. The actual distribution is then performed by the IDoc layer.

The predefined distribution ABAPs can be used as templates for your own developments. ALE uses IDocs to transmit data between systems.

ALE is, of course, not restricted to the data types which are already predefined in
the BALE transaction. You can write your own ALE distribution handlers, which only
need to comply with some formal standards, e.g., not bypassing the ALE scenarios.

All ALE distribution uses IDocs to replicate the data to the target system. The ALE
applications check with the distribution scenario and do nothing more than call the
matching IDoc function module, which is alone responsible for gathering the
requested data and bringing them to the required data port. You need to thoroughly
understand the IDoc concept of SAP beforehand, in order to understand ALE.

The process is extremely simple: Every time a data object, which is mentioned in an
ALE scenario changes, an IDoc is triggered from one of the defined triggering
mechanisms. These are usually an ABAP or a technical workflow event.

Distribution ABAPs are started manually or can be set up as a triggered or timed
batch job. Sample ABAPs for ALE distribution are those used for master data
distribution in transaction BALE, like the ones behind the transaction BD10, BD12
etc.

The workflow for ALE is based on change pointers. Change pointers are entries in a
special database entity, which record the creation or modification of a database
object. These change pointers are very much like the SAP change documents. They
are also written from within a change document, i.e. from the function
CHANGEDOCUMENT_CLOSE. The workflow is also triggered from within this
function.

SAP writes these ALE change pointers to circumvent a major drawback of the
change documents. Change documents are only written if a value of a table column
changes and the column is associated with a data element which is marked as
relevant for change documents (see SE11). ALE change pointers use a customised
table which contains the names of those table fields which are relevant for change
pointers.
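The change-pointer idea can be sketched like this. This is a Python illustration of the mechanism, not SAP code; the table of relevant fields stands in for the customised table mentioned above, and all names are invented.

```python
# Sketch of ALE change pointers: only changes to fields registered as
# relevant produce a pointer entry, independent of the change-document
# flag on the data element.

RELEVANT_FIELDS = {("MARA", "MATNR"), ("MARA", "MTART")}  # customised table

change_pointers = []  # stands in for the change-pointer database entity

def record_change(table, field, key, new_value):
    """Write a change pointer only if the field is registered as relevant."""
    if (table, field) in RELEVANT_FIELDS:
        change_pointers.append(
            {"table": table, "field": field, "key": key, "value": new_value}
        )
```

A distribution ABAP or workflow can later read the collected pointers and create the corresponding IDocs.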

USEFUL ALE TRANSACTION CODES

ALE is customised via three main transactions. These are SALE, WEDI and BALE.


SALE - ALE Customising :

This is the core transaction for ALE customising. Here you find everything ALE
related which has not already been covered by the other customising transactions.

WEDI - IDoc Administration :

Here you define all the IDoc related parts, which make up most of the work related
to ALE.

BDBG - Automatically generate IDocs From A BAPI :

Good stuff for power developers. It allows you to generate all IDoc definitions
including segments and IDoc types from the DDIC entries for a BAPI definition.

ALE Customizing SALE

All ALE-specific customising is done from within the transaction SALE, which links
you to a subset of the SAP IMG.

The scenario defines the IDoc types and the pairs of IDoc partners which participate
in the ALE distribution. The distribution scenario is the reference for all ABAPs and
functionality to determine which data is to be replicated and who could be the
receiving candidates. This step is, of course, mandatory.

The change pointers can be used to trigger the ALE distribution. This is only
necessary if you really want to use that mechanism. You can, however, send out
IDocs every time an application changes data. This does not require the set-up of the
change pointers.

SAP allows the definition of rules, which allow a filtering of data, before they are
stored in the IDoc base. This allows you to selectively accept or decline individual
IDoc segments.
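The table-driven segment filtering just described can be illustrated with this Python sketch. The rule structure and all names are assumptions for the example; in R/3 the rules are Customizing table entries, not code.

```python
# Sketch of ALE segment filtering: a rule table decides per segment type
# whether a segment is kept or dropped before the IDoc reaches the port.

FILTER_RULES = {
    # segment type -> predicate on the segment data
    "E1MARCM": lambda data: data.get("WERKS") != "0001",  # decline plant 0001
}

def apply_segment_filter(segments):
    """Keep only segments whose filter rule (if any) accepts them."""
    return [s for s in segments
            if FILTER_RULES.get(s["segment"], lambda d: True)(s["data"])]
```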

ALE allows the definition of conversion rules. These rules allow the transformation of
individual field data according to mapping tables. Unfortunately, the use of a function
module to convert the data is not realized in the current R/3 release.

The filter and conversion functionality is only attractive at first glance. From
practical experience we can state that it is not really helpful. It takes a long time to set up the rules, and the rules are usually not powerful enough to avoid modifications in an individual scenario. Conversion rules tend to remain stable once they have been defined. Thus, it is usually easier to call an individual IDoc processing function module, which performs the desired task more flexibly and easily.

Basic settings have to be adjusted before you can start working with ALE.

Before we start, we need to maintain some logical systems. These are names for the
RFC destinations which are used as communication partners. An entry for the
logical system is created in the table TBDLS.

Finally, you will have to assign a logical system to the clients involved in ALE or
IDoc distribution. This is done in table T000, which can be edited via SM31 or via
the respective SALE tree element.

The distribution model (also referred to as ALE-Scenario) is a more or less graphical approach to define the relationship between the participating senders and receivers.
The distribution model is shared among all participating partners. It can, therefore,
only be maintained in one of the systems, which we shall call the leading system.

Only one system can be the leading system, but you can set the leading system to
any of the partners at any time, even if the scenario is already active.
This will be the name under which you will address the scenario. It serves as a
container in which you put all the from-to relations.

You can have many scenarios for different purposes. You may also want to
put everything in a single scenario. As a rule of thumb, it has proved successful to
create one scenario per administrator. If you have only one ALE administrator,
there is no use having more than one scenario. If you have several departments with
different requirements, then it might be helpful to create one scenario per
department.

The model view displays graphically the from-to relations between logical systems.
You now have to generate the partner profiles which are used to identify the
physical means of data transportation between the partners.

A very useful utility is the automatic generation of partner profiles out of the ALE scenario.

Even if you do not use ALE in your installation, it can be helpful to define the EDI partners as ALE scenario partners and generate the partner profiles.
If you define the first profile for a partner, you have to create the profile header first.

The partner class is only a classification value. You can give an arbitrary name in order to group the type of partners, e.g. EDI for external ones, ALE for internal ones, and IBM for connection with IBM OS/390 systems.

There is a very powerful utility which allows you to generate most IDoc and ALE interface objects directly from a BAPI’s method interface.

Every time the BAPI is executed, the ALE distribution is checked.

For each of the parameters in the BAPI's interface, the generator creates a segment for the IDoc type. Some segments are used for IDoc inbound only, others for IDoc outbound. Parameter fields that are not structured are combined into a single segment, which is placed as the first segment of the IDoc type and contains all these fields. This collection segment receives the name of the IDoc type.
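The generation rule above can be sketched as follows. This is a hedged Python illustration of the BDBG behaviour described in the text, not the generator itself; the parameter names are invented.

```python
# Sketch of BDBG-style segment generation: structured parameters each become
# a segment; unstructured fields are collected into one leading segment
# named after the IDoc type.

def generate_segments(idoc_type, parameters):
    """parameters: dict of name -> dict (structured) or scalar (unstructured)."""
    scalars = {n: v for n, v in parameters.items() if not isinstance(v, dict)}
    segments = [{"segment": idoc_type, "data": scalars}]  # collection segment
    for name, value in parameters.items():
        if isinstance(value, dict):
            segments.append({"segment": name, "data": value})
    return segments
```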

Defining Filter Rules :

ALE allows you to define simple filter and transformation rules. These are table entries which are processed every time the IDoc is handed over to the port. Depending on the assigned path, this happens either on inbound or outbound.

Using the OLE/ActiveX functionality of R/3, you can call R/3 from any object-aware language.

In practice, the language must be able to make DLL calls to the RFC libraries of R/3. SAP R/3 scatters the documentation for these facilities in several subdirectories of the SAPGUI installation.

• R/3 can exchange its IDocs by calling a program that resides on the server.
• The programs can be written in any language that supports OLE-2/ActiveX technology.
• Programming skills are mainly required on the PC side, e.g. you need to know Delphi, JavaScript or Visual Basic well.

ALE

Reasons for Distributing Business Functions

In a modern company, the flows of logistics and information between the various organizational units are likely to be sizable and complex. One reason for this is the adoption of new management concepts like "lean production".

Many previously centralized responsibilities are now being assigned to the organizational units that are directly linked to the relevant information or to the production.

The assignment of business management functions like inventory management, central purchasing or financial accounting to the various organizational units is not the same in every company.

There is a tendency in some areas towards an increasing independence between business units within a company. This lends itself to the idea of modeling intra-company relationships along the same lines as customer-vendor relationships.

Market requirements have led to many changes in business processes. These have increased the demands on process flows in areas such as purchasing, sales and distribution, production and accounting.

The increasing integration of business processes means that they can no longer be modeled in terms of a single company only. Relationships with customers and vendors must also be considered.


Distributing these various tasks away from the center means that a high level of communication is demanded from integration functions. Fast access to information held in other areas is required (for example, the sales department may require information on the stocks of finished products in the individual plants).

Distributed Responsibilities in a Company.

Users of modern business data processing systems require:


a high degree of integration between business application systems to ensure effective modeling of business processes

decoupled application systems that can be implemented decentrally and independently of any particular technology.

The design, construction and operation of complex, enterprise-wide, distributed application systems remains one of the greatest challenges in data processing. The conventional solutions available today do not provide a totally satisfactory answer to the diverse needs of today's users.

Further standardization of business processes accompanied by ever tighter integration within a central system no longer represents a practicable approach to the problem.

The following are some of the most commonly encountered difficulties:

• technical bottlenecks,
• upgrade problems,
• the effect of time zones on international corporations,
• excessively long response times in large centralized systems.

For these reasons a number of R/2 customers operate several systems in parallel (arranged, for example, on a geographical basis). Whilst the three-tier client-server architecture of the R/3 System means that the significance of these technical restrictions is somewhat reduced, they are still present.

Whilst the idea of using distributed databases to implement distributed application systems sounds tempting, this is rarely a practical approach these days. The reasons for this include high communications overhead, uneconomic data processing operations and inadequate security mechanisms.

ALE - The Objectives

ALE (Application Link Enabling) supports the construction and operation of distributed applications. ALE handles the exchange of business data messages across loosely coupled SAP applications, ensuring that data is consistent. Applications are integrated by using synchronous and asynchronous communication, rather than by means of a central database.

ALE comprises three layers:

1. applications
2. distribution
3. communication

In order to meet the requirements of today's customers and to be open for future developments, ALE must meet the following challenges:

Communication between different software releases
Continued data exchange after a release upgrade without special maintenance.
Independence of the technical format of a message from its contents
Extensions that can be made easily, even by customers
Applications that are decoupled from the communication
Communications interfaces that allow connections to third-party applications
Support for R/3-R/2 scenarios

ALE - The Concept

The basic principle behind ALE is the guarantee of a distributed, yet fully integrated, R/3 System installation. Each application is self-sufficient and exists in the distributed environment with its own set of data.

Distributed databases are rarely a good solution today to the problem of data transport for the following reasons:

The R/3 System contains consistency checks that could not be performed in an individual database. Replicating tables in distributed databases would render these consistency checks useless.

Mirrored tables require two-phase commits. These result in a heavy loss of performance.

The distribution is controlled at the level of tables for distributed databases, and at the level of the applications in the case of ALE distribution.

Long distance access to distributed data can be difficult even today (because of error rates, a high level of network activity and long response times).

The use of self-sufficient systems implies a certain measure of data redundancy. Therefore data has to be both distributed and synchronized across the entire system. Communication is performed asynchronously.

For certain functions that require read-only access to information, direct requests have to be made between the remote systems, using synchronous RFC, or, if this is not available, CPI-C programs. The function modules and CPI-C programs are written as required for each application.

Summary

There are both technical and business-related benefits to be realized from the distribution of applications in an integrated network.

State-of-the-art communication technology and the client/server architecture have made the distribution of standard software technically possible.

Distributed databases do not represent a good solution for the distribution of control data, master data and transaction data.

Asynchronous exchange of data with a measure of data redundancy is the best solution utilizing today's technology.

The goal of ALE is to enable data exchange between R/3-R/3, R/2- R/3 and R/3-non-SAP systems.

Control data, master data and transaction data is transmitted.
ALE also supports release upgrades and customer modifications.
ALE allows a wide range of customer-specific field choices in the communication.
IDocs (Intermediate Documents) are used for the asynchronous communication.
Allowance is made for distribution in the various applications of the R/3 System.
The application initiates the distribution of the data.
ALE and EDI complement each other.


OUTBOUND PROCESSING

In outbound processing, one of the application's function modules creates an IDoc, the so-called master IDoc. This IDoc is sent to the ALE layer, where the following processing steps are applied:

• receiver determination, if this has not already been done by the application
• data selection
• segment filtering
• field conversion
• version change

The resulting IDocs (it is possible that several IDocs could be created in the receiver determination) are referred to as communication IDocs and are stored in the database. The dispatch control then decides which of these IDocs should be sent immediately. These are passed to the communications layer and are sent either using the transactional Remote Function Call (RFC) or via file interfaces (e.g. for EDI).
If an error occurs in the ALE layer, the IDoc containing the error is stored and a workflow is created. The ALE administrator can use this workflow to process the error.
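The outbound steps listed above can be composed as a simple pipeline. The sketch below is a Python illustration with placeholder stages; the actual processing happens inside the ALE layer in ABAP.

```python
# The outbound ALE steps as a pipeline: receiver determination, data
# selection, segment filtering, field conversion, version change. Each
# stage is passed in as a function so the flow itself stays visible.

def outbound_pipeline(master_idoc, determine_receivers, select_data,
                      filter_segments, convert_fields, change_version):
    """Turn one master IDoc into stored communication IDocs."""
    communication_idocs = []
    for receiver in determine_receivers(master_idoc):
        idoc = select_data(master_idoc, receiver)
        idoc = filter_segments(idoc)
        idoc = convert_fields(idoc, receiver)
        idoc = change_version(idoc, receiver)
        idoc["rcvprn"] = receiver
        communication_idocs.append(idoc)  # stored; dispatch control sends them
    return communication_idocs
```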

OUTBOUND PROCESSING STEP BY STEP

Receiver Determination

An IDoc is similar to a normal letter in that it has a sender and a receiver. If the receiver has not been explicitly identified by the application, then the ALE layer uses the customer distribution model to help determine the receivers for the message.

The ALE layer can find out from the model whether any distributed systems should receive the message and, if so, then how many. The result may be that one, several or no receivers at all are found.

For each of the distributed systems that have been ascertained to be receiver systems, the data that is specified by the filter objects in the customer distribution model is selected from the master IDoc. This data is then used to fill an IDoc, and the appropriate system is entered as receiver.
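Receiver determination against the customer distribution model can be pictured like this. The row structure and all system names are assumptions for illustration; in R/3 the model is maintained in Customizing, not in code.

```python
# Sketch of the customer distribution model: rows of (sender, message type,
# receiver), optionally with filter objects. Receiver determination is a
# lookup over these rows.

MODEL = [
    {"sender": "PROD", "mestyp": "MATMAS", "receiver": "SALES",
     "filters": {"WERKS": {"0001", "0002"}}},
    {"sender": "PROD", "mestyp": "MATMAS", "receiver": "ACCTG",
     "filters": {}},
]

def find_receivers(sender, mestyp):
    """Return the receivers the model defines for this sender and message."""
    return [row["receiver"] for row in MODEL
            if row["sender"] == sender and row["mestyp"] == mestyp]
```

The result may be one, several, or no receivers at all, exactly as described above.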

Segment Filtering

Individual segments can be deleted from the IDoc before dispatch by selecting Functions for the IDoc processing → Settings for filtering in ALE Customizing. The appropriate setting depends on the sending and receiving logical R/3 System.

Field Conversion

Receiver-specific field conversions are defined under Functions for the IDoc processing → Conversions in ALE Customizing.

General rules can be specified for field conversions; these are important for converting data fields to exchange information between R/2 and R/3 Systems. For example, the field "plant" can be converted from a 2 character field to a 4 character field.

The conversion is done using general EIS conversion tools (Executive Information System).
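The plant-field example above can be written as a tiny conversion rule. The left-pad-with-zeros convention is an assumption for the illustration; the actual rule is defined in Customizing, not in code.

```python
# Sketch of a field conversion rule: widen a 2-character R/2 plant code
# to the 4-character R/3 field by left-padding with zeros (assumed
# convention for this example).

def convert_plant_r2_to_r3(plant_2char):
    """Return the value padded to the 4-character R/3 plant field."""
    if len(plant_2char) > 4:
        raise ValueError("plant code too long")
    return plant_2char.rjust(4, "0")
```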

IDoc Version Change

SAP ensures that ALE functions between different R/3 System releases. By changing the IDoc format you can convert message types of different R/3 releases. SAP Development use the following rules when converting existing message types:

• Fields may be appended to a segment type;
• Segments can be added;

ALE Customizing keeps a record of which version of each message type is in use for each receiver. The correct version of the communication IDoc is created in the ALE output.
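Because message types only ever gain fields and segments, an IDoc can be adapted for an older receiver by dropping everything that receiver's release does not know. The sketch below illustrates this in Python; the version table and segment names are invented.

```python
# Sketch of the IDoc version change: the recorded receiver version decides
# which segments survive. Appending-only evolution makes this safe.

KNOWN_SEGMENTS = {
    "3.0": {"E1MARAM"},
    "3.1": {"E1MARAM", "E1MARCM"},  # segment added in the newer release
}

def downgrade(segments, receiver_version):
    """Keep only the segments the receiver's release understands."""
    known = KNOWN_SEGMENTS[receiver_version]
    return [s for s in segments if s["segment"] in known]
```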

Dispatch Control

Controlling the time of dispatch:

The IDocs can either be sent immediately or in the background processing. This setting is made in the partner profile.
If the IDoc is to be dispatched in batch, a job has to be scheduled. You can choose the execution frequency (e.g. daily, weekly).

Controlling the amount of data sent:

• IDocs can be dispatched in packets. To define a packet size appropriate for a specific partner, select Communication → Manual maintenance of partner profile → Maintain partner profile in ALE Customizing.


Mass Processing of IDocs

Mass processing refers to bundles of IDoc packets, which are dispatched and processed by the receiving R/3 System. Only one RFC call is needed to transfer several IDocs. Performance is considerably better when transferring optimal packet sizes.
To define a mass processing parameter, select Communication → Manual maintenance of partner profile → Maintain partner profile. For a message type, the parameters packet size and output mode can be defined.


If the output mode is set to "Collect IDocs", outbound IDocs of the same message type and receiver are sent in appropriately sized packets, either in a scheduled background job or from the BALE transaction.

Some distribution scenarios cannot support mass processing of inbound IDoc packets. This is especially true if the receiving application posts the IDocs using the ABAP/4 command CALL TRANSACTION USING. In this case the outbound parameter PACKETSIZE must be set to "1".
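The packet mechanism reduces to grouping outbound IDocs into fixed-size chunks, one RFC call per chunk. A minimal sketch, with the function name invented:

```python
# Sketch of dispatch packeting: group outbound IDocs of one message type
# and receiver into packets of the configured size. PACKETSIZE = 1 yields
# one IDoc per call, as required for CALL TRANSACTION USING scenarios.

def build_packets(idocs, packet_size):
    """Split the IDoc list into packets of at most packet_size entries."""
    if packet_size < 1:
        raise ValueError("packet size must be at least 1")
    return [idocs[i:i + packet_size]
            for i in range(0, len(idocs), packet_size)]
```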

To get a list of function modules that can be mass processed, select Enhancements → Inbound → Specify inbound module in ALE Customizing. INPUTTYP is "0".

INBOUND PROCESSING

After an IDoc has been successfully transmitted to another system, inbound processing is carried out in the receiver system, involving the following steps in the ALE layer:

• segment filtering
• field conversion
• data transfer to the application

There are three different ways of processing an inbound IDoc:

• A function module can be called directly (standard setting),
• A workflow can be started
• A work item can be started

INBOUND PROCESSING STEP BY STEP

Segment Filtering

Segment filtering functions the same way in inbound processing as in outbound processing.


Field Conversion

Specific field conversions are defined in ALE Customizing.
The conversion itself is performed using general conversion tools from the EIS area (Executive Information System).

Generalized rules can be defined. The ALE implementation guide describes how the conversion rules can be specified.
One set of rules is created for each IDoc segment and rules are defined for each segment field.
The rules for converting data fields from an R/2-specific format to an R/3 format can be defined in this way. An example of this R/2 - R/3 conversion is the conversion of the plant field from a 2 character field to a 4 character field.

Input Control

When the IDocs have been written to the database, they can be imported by the receiver application.
IDocs can be passed to the application either immediately on arrival or later in batch.
You can post an inbound IDoc in three ways:

1. by calling a function module directly:

- A function is called that imports the IDoc directly. An error workflow will be started only if an error occurs.

2. by starting a SAP Business Workflow. A workflow is the sequence of steps to post an IDoc.

- Workflows for ALE are not supplied in Release 3.0.

3. by starting a work item

- A single step performs the IDoc posting.
The standard inbound processing setting is that ALE calls a function module directly. For information about SAP Business Workflow alternatives refer to the online help for ALE programming.

You can specify the people to be notified for handling IDoc processing errors for each message type in SAP Business Workflow.

Repeated Attempts to Pass the IDoc to the Application

If the IDoc could not be passed to the application successfully (status: 51 - error on handover to application), then repeated attempts may be made with the RBDMANIN report.
This functionality can be accessed through the menu: Logistics → Central functions → Distribution, and then Periodic work → IDoc, ALE input
Selections can be made according to specific errors. Therefore this report could be scheduled as a periodic job that collects IDocs that could not be passed to the applications because of a locking problem.
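Conceptually, such a periodic job selects the status-51 IDocs, optionally narrowed to a specific error, and passes them to the application again. The sketch below illustrates this in Python; the status codes 51 and 53 follow standard IDoc status semantics, while the function names and record layout are invented.

```python
# Sketch of an RBDMANIN-style retry: re-post IDocs in status 51 (error on
# handover to application); on success, set status 53 (posted).

def retry_failed_idocs(idoc_store, post_to_application, error_filter=None):
    """Re-post status-51 IDocs, optionally only those matching one error."""
    for idoc in idoc_store:
        if idoc["status"] != 51:
            continue
        if error_filter and idoc.get("error") != error_filter:
            continue
        if post_to_application(idoc):
            idoc["status"] = 53  # application document posted
    return idoc_store
```

Scheduled periodically with an error filter for locking problems, this matches the usage suggested above.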

Error Handling in ALE Inbound Processing

The following is a description of how an error that occurs during ALE processing is handled:

• The processing of the IDoc causing the error is terminated.
• An event is triggered.
• This event starts an error workitem:

- The employees responsible will find a workitem in their workflow inboxes.
- An error message is displayed when the workitem is processed.
- The error is corrected in another window and the IDoc can then be resubmitted for processing.
- If the error cannot be corrected, the IDoc can be marked for deletion.

Once the IDoc has been successfully imported, an event is triggered that terminates the error workitem. The workitem then disappears from the inbox.

Objects and Standard Tasks

Message Type Standard Task ID of Standard Task

BLAOCH 7975 BLAOCH_Error
BLAORD 7974 BLAORD_Error
BLAREL 7979 BLAREL_Error
COAMAS None
COELEM None
COPAGN 8062 COPAGN_Error
COPCPA 500002 COPCPA_Error
COSMAS 8103 COSMAS_Error
CREMAS 7959 CREMAS_Error
DEBMAS 8039 DEBMAS_Error
EKSEKS 8058 EKSEKS_Error
FIDCMT 8104 FIDCMT_Error
FIROLL 8113 FIROLL_Error
GLMAST 7950 GLMAST_Error
GLROLL 7999 GLROLL_Error
INVCON 7932 INVCON_Error
INVOIC 8057 INVOIC_MM_Er
MATMAS 7947 MATMAS_Error
ORDCHG 8115 ORDCHG_Error
ORDERS 8046 ORDERS_Error
ORDRSP 8075 ORDRSP_Error
SDPACK None
SDPICK 8031 SDPICK_Error
SISCSO 8059 SISCSO_Error
SISDEL 8060 SISDEL_Error
SISINV 8061 SISINV_Error
SOPGEN 8063 SOPGEN_Error
WMBBIN 8047 WMBBIN_Error
WMCATO 7968 WMCATO_Error
WMCUST 8049 WMCUST_Error
WMINFO 8032 WMINFO_Error
WMINVE 7970 WMINVE_Error
WMMBXY 8009 WMMBXY_Error
WMSUMO 8036 WMSUMO_Error
WMTOCO 7972 WMTOCO_Error
WMTORD 8013 WMTORD_Error
WMTREQ 8077 WMTREQ_Error
COSFET COSFET_Error
CREFET CREFET_Error
DEBFET DEBFET_Error
GLFETC GLFETC_Error
MATFET MATFET_Error

EDI Message Types
Message Type Standard Task ID of Standard Task

DELINS 8000 DELINS_Error
EDLNOT 8065 EDLNOT_error
INVOIC 8056 INVOIC_FI_Er
REMADV 7949 REMADV_Error
ALE QUICK START

This documentation describes how to configure distribution between your R/3 Systems using Application Link Enabling (ALE). You will learn how to create a message flow between two clients and how to distribute material master data, and you will become familiar with the basic steps of ALE configuration.

To set up and perform the distribution, proceed as follows:


1. Setting Up Clients
2. Defining A Unique Client ID
3. Defining Technical Communications Parameters
4. Modeling the Distribution
5. Generating Partner Profiles in the Sending System
6. Distributing the Customer Model
7. Generating Partner Profiles in the Receiving System
8. Creating Material Master Data
9. Sending Material Master Data
10. Checking Communication


1. Setting Up Clients

You must first set up two clients to enable communication. The two clients may be located on the same physical R/3 System or on separate systems.


You can either use existing clients or you can create new clients by making copies of existing ones (for example, a copy of client 000 or a client of the International Demo System (IDES)). To create new clients, you use the Copy source client function. You will find this function in the Customizing (Tools ® Business Engineering ® Customizing) under Basic functions ® Set up clients. Here you will also find additional information on setting up the clients.

Example: Clients 100 and 200 are available. Both are copies of client 000.

2. Defining A Unique Client ID

To avoid confusion, each system participating in a distributed environment must have a unique ID. The name of the logical system is used as this unique ID, and the name is assigned to exactly one client in an R/3 System.
Once you have set up the two clients for the exercise, you must tell each of them which logical systems exist in the distributed environment and which name identifies its own client. You will find the functions you require in the Customizing for ALE under Basic configuration ® Set up logical system.

Example : Client 100 is described as logical system LOGSYS0100.
Client 200 is described as logical system LOGSYS0200.

To maintain the logical systems in the distributed environment, choose Maintain logical systems:

Execute the function and enter a logical system name (Log. System) and a short text for each of your clients.

Save your entries.

When using two clients in different systems, make sure that you maintain identical entries in both systems. When using two clients in one physical R/3 System, you have to make the settings only once, since the entries are client-independent.

Log. System Short text

LOGSYS0100 System A, client 100
LOGSYS0200 System B, client 200

Allocate the corresponding logical systems to both clients using the Allocate logical system to the client function:

Execute the function in each of the two clients.
In the view, double-click on the corresponding client.
In the Logical system field, enter the logical system name to be assigned to the individual client.
Save your entry.

In client Logical system

100 LOGSYS0100
200 LOGSYS0200

3. Defining Technical Communications Parameters

For the two logical systems to be able to communicate with one another, each must know how to reach the other technically. This information is found in the RFC destination.

On each of the two clients, you must maintain the RFC destination for the other logical system. You will find the function you require in the Customizing for ALE under the item Communication ® Define RFC destination.

Execute the function.
Choose Create.
Define the RFC destination:
- For the name of the destination, use the name of the logical system which is to refer to the destination (use UPPERCASE letters).

In client 100 you maintain the RFC destination LOGSYS0200.
In client 200 you maintain the RFC destination LOGSYS0100.

- As Connection type, choose 3.
- Enter a description of the RFC destination.

'RFC destination for the logical system LOGSYS0200' as a description of destination LOGSYS0200.

- As logon parameters, enter the logon language (for example, E), the logon client (for example, 200 for LOGSYS0200) and the logon user (user ID with target system password).
- Choose Enter.
- Enter the target machine and the system number:

The target machine indicates which receiving system application server is to handle communication. You can enter the specifications as UNIX host name, as host name in DNS format, as IP address or as SAP router name.
If you use SAP Logon, you can retrieve the information via Server selection ® Servers. Choose the corresponding SAP System ID and then OK. The system displays a list of all available application servers.

The system number indicates the service used (TCP service, SAP system number). When using SAP Logon, you can get the system number by selecting the system on the initial screen and then choosing Edit.

- Save your entries.
- After saving the RFC destination, you can use Test connection to test the connection, and attempt a remote logon via Remote Login. If you succeed, the system displays a new window of the other system. Choose System ® Status... to check that you are in the correct client.
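The connection test can also be performed programmatically, for example from a small test program. A sketch using the standard function module RFC_PING follows; the destination name matches the example above, and the message variable captures the error text of a failed call:

```abap
* Ping the RFC destination and report the result.
DATA: lv_msg(80) TYPE c.

CALL FUNCTION 'RFC_PING'
  DESTINATION 'LOGSYS0200'
  EXCEPTIONS
    communication_failure = 1 MESSAGE lv_msg
    system_failure        = 2 MESSAGE lv_msg.

IF sy-subrc <> 0.
  WRITE: / 'Connection failed:', lv_msg.
ELSE.
  WRITE: / 'Destination LOGSYS0200 is reachable.'.
ENDIF.
```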

Define RFC Destination

In this section, you define the technical parameters for the RFC destinations.
The Remote Function Call is controlled via the parameters of the RFC destination.
The RFC destinations must be maintained in order to create an RFC port.
The name of the RFC destination should correspond to the name of the logical system in question.
The following types of RFC destinations are maintainable:

• R/2 links
• R/3 links
• internal links
• logical destinations
• CMC link
• SNA/CPI-C connections
• TCP/IP links
• connections via the ABAP/4 driver

Example :

1. Enter the following parameters for an R/3 link:

- name for RFC destination: S11BSP001
- link type: 3 (for R/3 link)
- target machine: bspserver01
- system number: 11
- user in target machine: CPIC
- password, language and target client.

Standard settings

In the standard system, no RFC destinations are maintained.

Activities

1. Click on one of the categories (for example, R/3 links) and choose Edit -> Create.
2. Enter the required parameters, depending on the type.
3. For an R/3 link these are, for example, the name of the RFC destination, the name of the partner machine, and the logon parameters (see example).

For an R/2 connection, select the option 'Password unlocked' in the logon parameters. To test an R/2 connection you cannot use the connection test in the transaction; instead, use report ACPICT1, which sets up a test connection to client 0 of the host destination. Select the check boxes for the parameters ABAP and CONVERT.

Processing RFCs with errors

If errors occur in a Remote Function Call, they are handled by default with single error processing. A background job is scheduled for each RFC that resulted in an error, and this job keeps restarting the RFC until it has been processed successfully. If the connection to the recipient system is broken, this can mean that a very large number of background jobs is created, placing a considerable additional load on the sending system.
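The RFCs handled here are transactional RFC (tRFC) calls: the call is only recorded when the application issues it and is executed in the target system at COMMIT WORK, so a failed call can then be seen and restarted in the tRFC monitor. A minimal sketch follows; the function module name Z_SEND_DATA and its parameter are made up for illustration:

```abap
* Record the call as a transactional RFC; it is not executed yet.
CALL FUNCTION 'Z_SEND_DATA'          " made-up RFC-enabled module
  IN BACKGROUND TASK
  DESTINATION 'LOGSYS0200'
  EXPORTING
    i_text = 'Hello target system'.  " parameter name is an assumption

* The recorded call is executed in LOGSYS0200 only now; if it fails,
* it is retried according to the error processing described here.
COMMIT WORK.
```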


In productive operation you should always use collective error processing to improve system performance. The RFC is then not automatically re-submitted immediately; instead, a periodically scheduled background job collects all the failed RFCs and restarts them as a packet. This reduces the number of background jobs created. Collective error processing can be used both for R/3 connections and for TCP/IP connections.

To set up the collective error processing proceed as follows:

• Change the RFC destination
• Select the Destination -> TRFC options function from the menu.
• Enter the value 'X' into the 'Suppress backgr. job in case of comms. error' field.

Perform the error handling as follows:

• Start the 'Transactional RFC' monitor (menu: Logistics -> Central functions -> Distribution -> Monitoring -> Transactional RFC)
• Select the Edit -> Select.FM execute function.

For error handling, schedule a periodic background job that performs these steps regularly.

Practice handling errors in the Remote Function Call before the productive start.

Further notes

The 'SAP*' user may not be used on the target machine for Remote Function Calls.

Notes on the transport

The maintenance of RFC destinations is not part of the automatic transport and correction system. The settings therefore have to be made manually in all systems.


