Oracle Fusion Cloud: Redwood Data Extraction Feature
Comprehensive Implementation & Reference Guide
📋 Executive Summary
Feature Name: Redwood: Extract Application Data from a Read-Optimized Data Store Using a Redwood Page
Release: Oracle Fusion Cloud Update 26A (Late 2025)
Purpose: Modern, high-performance data extraction tool that offloads data retrieval from the transactional database to a dedicated read-optimized replica, eliminating performance impact on production systems.
Key Innovation: Separates read operations (reporting/extraction) from write operations (transactions) using a continuously synchronized Autonomous Data Warehouse replica.
🎯 What Problem Does This Solve?
Traditional Challenge
- Data extraction ran directly against the Fusion transactional (OLTP) database
- Heavy extraction queries competed with live business transactions
- Performance degradation during large data pulls
- Risk of timeouts and system slowdowns
- Impact on user experience during peak extraction times
Modern Solution
- Extraction occurs on a separate read-optimized ADW replica
- Zero impact on transactional workload
- Near real-time data synchronization (typically seconds to minutes lag)
- Significantly faster extraction for large datasets
- Scalable architecture supporting concurrent extractions
🏗️ Architecture Overview
┌─────────────────────────────────────────────────────────┐
│ USER INTERACTION │
│ Tools > Data Extraction (Redwood UI) │
└─────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────┐
│ BUSINESS OBJECT LAYER │
│ (Abstracts complex Fusion schema into logical views) │
└─────────────────────────────────────────────────────────┘
↓
┌──────────────────────┐ ┌──────────────────┐
│ TRANSACTIONAL DB │ ═══sync═══> │ READ-OPTIMIZED │
│ (Fusion OLTP) │ near real │ DATA STORE │
│ - Live operations │ time │ (ADW Replica) │
│ - Writes/Updates │ │ - Extractions │
│ - User transactions │ │ - Reporting │
└──────────────────────┘ └──────────────────┘
(The transactional DB is NOT used for extraction; the read-optimized store handles ALL data extraction.)
Component Breakdown
| Component | Role | Technology |
|---|---|---|
| Transactional Database | Handles live business operations | Oracle Fusion OLTP Database |
| Read-Optimized Data Store | Dedicated extraction & reporting | Autonomous Data Warehouse (ADW) |
| Replication Layer | Continuous data sync | Near real-time CDC (Change Data Capture) |
| Business Object Layer | Simplified data access | Abstraction views over Fusion schema |
| Redwood UI | User interface | Modern Oracle Redwood design system |
✨ Key Benefits & Value Proposition
1️⃣ Performance Gains
- 10-100x faster extraction for large datasets
- Parallel processing capabilities on ADW
- Optimized for analytical queries
- No resource contention with transactions
2️⃣ Zero Production Impact
- Transactional system remains untouched during extraction
- No risk of slowing down user operations
- Run extractions during business hours without concern
- Eliminates need for off-hours extraction windows
3️⃣ Data Currency
- Near real-time synchronization (typically < 5 minutes lag)
- Acceptable for most reporting/integration scenarios
- Configurable sync frequency based on needs
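Because the replica is near real-time rather than real-time, downstream jobs may want to gate on data freshness before consuming an extract. A minimal sketch, assuming you can obtain the replica's last-sync timestamp from your monitoring (the function name and 5-minute threshold are illustrative, not part of any Oracle API):

```python
from datetime import datetime, timedelta, timezone

# Illustrative threshold: the maximum replication lag this workload tolerates.
MAX_LAG = timedelta(minutes=5)

def is_fresh_enough(last_sync_utc, now=None):
    """Return True if the replica's last sync is within the acceptable lag."""
    now = now or datetime.now(timezone.utc)
    return (now - last_sync_utc) <= MAX_LAG

now = datetime.now(timezone.utc)
print(is_fresh_enough(now - timedelta(minutes=3), now))   # True
print(is_fresh_enough(now - timedelta(minutes=10), now))  # False
```

A job scheduler can call a check like this and defer the run (or alert) when the lag exceeds the tolerance.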
4️⃣ Simplified Access
- Business objects hide complex Fusion table relationships
- Logical entity-based extraction (e.g., "Purchase Orders", "Invoices")
- No need to understand underlying database schema
- Consistent interface across all modules
5️⃣ Scalability
- ADW automatically scales for large extractions
- Support for concurrent extraction jobs
- Handles millions of records efficiently
- Future-proof architecture
6️⃣ Modern User Experience
- Intuitive Redwood-based interface
- Self-service extraction capabilities
- Built-in scheduling and monitoring
- Download management integrated
🔧 Complete Setup & Configuration Guide
Step 1: Enable the Feature (Opt-In)
Navigation Path:
Setup and Maintenance → Search: "Manage Features" → Opt In Features
Instructions:
- Search for feature: "Redwood: Extract Application Data from a Read-Optimized Data Store Using a Redwood Page"
- Enable under any available offering:
- Manufacturing
- Supply Chain Materials Management
- Procurement
- (Other applicable modules)
Important Notes:
- ✅ Feature can be enabled under any single offering
- ✅ Once enabled, works for all product families (cross-module capability)
- ✅ Not limited to the module where you opt in
- ⚠️ Ensure parent feature in hierarchy is also enabled
- ⚠️ May require environment refresh after enablement
Step 2: Verify Environment Configuration
Critical Requirement: Your Oracle Cloud environment must have the read-optimized data store provisioned and configured.
Verification Steps:
- Contact Oracle Support via your designated help desk
- Request verification of:
  - ADW replica provisioning status
  - Replication configuration
  - Network connectivity between instances
  - Required backend services enablement
- Provide Oracle Support with:
  - Your Pod/Instance details
  - Feature opt-in confirmation
  - Business justification for the feature
Oracle Support Will Verify:
- ✅ ADW instance exists and is configured
- ✅ Replication pipelines are active
- ✅ Data synchronization is functioning
- ✅ Required services are enabled in your environment
⚠️ Do not proceed until Oracle Support confirms configuration is complete.
Step 3: Enable Security Console Integration
Profile Option to Set:
| Field | Value |
|---|---|
| Profile Name | Enable Security Console External Application Integration |
| Profile Code | ORA_ASE_SAS_INTEGRATION_ENABLED |
| Level | Site |
| Value | Yes |
Configuration Path:
Setup and Maintenance → Search: "Manage Administrator Profile Values"
Detailed Steps:
- In the Search field, enter ORA_ASE_SAS_INTEGRATION_ENABLED
- Click Search
- Locate the profile option
- Click Edit (pencil icon)
- Set the Site Level value to Yes
- Click Save and Close
What This Enables:
- Integration between Security Console and external applications
- Necessary for permission group management
- Required for role-based access control to the extraction tool
Step 4: Assign User Access Privileges
Required Privileges: Users need specific function security privileges to access the Data Extraction Tool.
Steps:
- Navigate to: Security Console → Manage Job Roles and Abstract Roles
- Search for the role(s) you want to grant access to
- Add the following privileges (the exact list varies by release; check the official documentation):
- Data Extraction Tool access
- Business object query privileges
- Extract file management privileges
Enable Permission Groups:
- For each role that needs access:
  - Edit the role
  - Navigate to the Permission Groups section
  - Ensure relevant permission groups are enabled
- Permission groups must be active for Security Console integration
Verification:
- User should see Tools → Data Extraction in navigation menu
- User can create new extraction requests
- User can view extraction history
Step 5: Grant File Download Access
Required Role:
| Role Name | Role Code |
|---|---|
| Upload and download data from on-premise system to cloud system | OBIA_EXTRACTTRANSFORMLOAD_RWD |
Assignment Steps:
- Navigate to: Security Console → Manage Users
- Search for the user
- Click Edit
- Go to Roles tab
- Click Add Role
- Search for OBIA_EXTRACTTRANSFORMLOAD_RWD
- Select and add the role
- Click Save and Close
Alternative (Assign to Job Role):
- Edit the job role used by data extraction users
- Add OBIA_EXTRACTTRANSFORMLOAD_RWD as an inherited role
- All users with that job role will inherit download permissions
What This Enables:
- ✅ View completed extraction files
- ✅ Download extract outputs to local system
- ✅ Access file metadata and details
- ✅ Delete old extraction files (if permitted)
📱 Using the Data Extraction Tool
Navigation
Navigator → Tools → Data Extraction
Main Interface Components
1. Create New Extract
- Select business objects to extract
- Define filters and parameters
- Schedule or run immediately
- Specify output format
2. Extraction History
- View all past extractions
- Monitor running jobs
- Check completion status
- Download completed files
3. Scheduled Extractions
- Manage recurring extracts
- Edit schedules
- Enable/disable scheduled jobs
4. Extract File Repository
- Browse available files
- Download extracts
- View file metadata (size, record count, etc.)
- Delete old files
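The scheduling options above (daily, weekly, monthly recurrences) reduce to computing the next run date. As a language-neutral illustration, here is a minimal Python sketch of that calculation, independent of any Oracle API; the monthly rule clamps the day for short months, which is one reasonable assumption:

```python
from datetime import datetime, timedelta
import calendar

def next_run(last_run, frequency):
    """Compute the next scheduled run for a recurring extract (sketch)."""
    if frequency == "daily":
        return last_run + timedelta(days=1)
    if frequency == "weekly":
        return last_run + timedelta(weeks=1)
    if frequency == "monthly":
        # Advance one calendar month, clamping the day to the month's length.
        year = last_run.year + (last_run.month // 12)
        month = last_run.month % 12 + 1
        day = min(last_run.day, calendar.monthrange(year, month)[1])
        return last_run.replace(year=year, month=month, day=day)
    raise ValueError(f"unsupported frequency: {frequency}")

print(next_run(datetime(2025, 1, 31), "monthly"))  # 2025-02-28 00:00:00
```

For anything more irregular, the tool's custom cron expressions cover cases a simple rule like this cannot.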
Creating an Extract
Step-by-Step Process:
- Click "Create Extract"
- Select Business Object(s)
  - Choose from available entities (e.g., Purchase Orders, Invoices, Items)
  - Multiple objects can be selected if related
  - Each object represents a logical business entity
- Define Parameters
  - Date Range: Specify data currency (e.g., last modified date)
  - Filters: Apply where clauses (e.g., Organization = "US Operations")
  - Attributes: Select specific columns/fields to extract
  - Format: Choose output format (CSV, XML, JSON, etc.)
- Schedule or Run
  - Run Now: Immediate extraction
  - Schedule: Set recurrence pattern
    - Daily
    - Weekly
    - Monthly
    - Custom cron expression
- Submit
  - Job is queued
  - Extraction runs against ADW replica
  - User receives notification on completion
- Download Results
  - Navigate to extraction history
  - Click download icon
  - Files are typically compressed (ZIP)
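Since completed extracts typically arrive as a ZIP of delimited files, downstream processing usually starts by unpacking and parsing them. A small, generic Python sketch for consuming such a download locally; the file layout (CSV members inside the ZIP) is an assumption, so adjust to the actual output format you selected:

```python
import csv
import io
import zipfile

def read_extract(zip_path):
    """Yield each data row (as a dict) from every CSV inside an extract ZIP."""
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if not name.lower().endswith(".csv"):
                continue  # skip manifests or other non-CSV members
            with zf.open(name) as raw:
                text = io.TextIOWrapper(raw, encoding="utf-8")
                for row in csv.DictReader(text):
                    yield row
```

Streaming rows this way avoids loading a multi-gigabyte extract into memory at once.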
💼 Use Cases & Applications
1. Enterprise Reporting
Scenario: Monthly financial reports requiring data from multiple Fusion modules
Implementation:
- Extract GL, AP, AR, and FA data
- Schedule monthly runs on 1st of each month
- Feed into enterprise BI tools (Tableau, Power BI, OBIEE)
- No impact on month-end close activities
Benefits:
- Reports run faster
- No conflict with financial close processes
- Consistent data format for BI tools
2. Data Lake Population
Scenario: Building a central data lake for advanced analytics and ML
Implementation:
- Daily incremental extracts of transactional data
- Extract to cloud storage (OCI Object Storage, AWS S3)
- Combine with data from other systems
- Enable advanced analytics and machine learning
Benefits:
- Fusion data available for cross-system analysis
- Support for big data technologies
- Historical data preservation
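For daily incremental loads like this, a common pattern is a high-water mark on the last-modified date, with the window's upper bound trailing the clock by the replication lag so in-flight rows are not missed. A sketch under that assumption (the 5-minute allowance is illustrative):

```python
from datetime import timedelta

# Replication lag allowance: the window's upper bound trails "now" so rows
# still being synchronized are captured by the next run instead of missed.
LAG_ALLOWANCE = timedelta(minutes=5)

def incremental_window(high_water_mark, now):
    """Return the (from, to) last-modified-date filter for the next pull."""
    upper = now - LAG_ALLOWANCE
    if upper <= high_water_mark:
        return None  # nothing safely extractable yet
    return (high_water_mark, upper)
```

After a successful load, persist the window's upper bound as the new high-water mark for the next run.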
3. Third-Party Integration
Scenario: Feeding Fusion data to external systems (CRM, legacy ERP, partner portals)
Implementation:
- Scheduled extracts of specific business objects
- API or file-based integration
- Near real-time data sync with partners
- Automated downstream processing
Benefits:
- Reliable data integration
- Predictable performance
- Simplified integration architecture
4. Compliance & Audit Reporting
Scenario: Quarterly compliance reports requiring detailed transaction history
Implementation:
- Extract complete audit trail data
- Include all fields for regulatory requirements
- Archive extracts for compliance retention
- Generate reports without system impact
Benefits:
- Complete data capture
- No disruption to business operations
- Audit-ready data extracts
5. Data Warehouse Loading
Scenario: Populate enterprise data warehouse with Fusion data
Implementation:
- Full and incremental extract strategies
- ETL processes consume extracted files
- Dimension and fact table population
- Historical trending and analysis
Benefits:
- Efficient bulk data transfer
- Minimal source system impact
- Support for complex transformations
🆚 Comparison: Old vs. New Approach
| Aspect | Legacy BICC / Direct DB Extraction | Redwood Read-Optimized Extraction |
|---|---|---|
| Data Source | Transactional OLTP database | Dedicated ADW replica |
| Performance Impact | High - competes with transactions | Zero - isolated workload |
| Extraction Speed | Slower - not optimized for analytics | 10-100x faster - optimized for reads |
| Concurrent Users | Limited - resource contention | Scalable - dedicated resources |
| Data Currency | Real-time | Near real-time (< 5 min lag) |
| Schema Access | Complex Fusion tables | Simplified business objects |
| User Interface | Legacy UI | Modern Redwood interface |
| Scheduling | Basic | Advanced with monitoring |
| Best Time to Run | Off-hours to avoid impact | Anytime - no impact |
| Scalability | Limited by OLTP capacity | Elastic ADW scaling |
⚠️ Important Considerations & Limitations
Data Latency
- Near real-time ≠ Real-time
- Typical sync lag: 2-5 minutes
- Not suitable for applications requiring instant data consistency
- Check data freshness requirements before implementation
Feature Availability
- Must be enabled via Opt-In process
- Requires backend environment configuration
- May not be available in all Oracle Cloud regions initially
- Confirm availability with Oracle Support
Role & Security
- Extraction respects Fusion security policies
- Users only see data they have access to
- Data-level security is enforced
- Test with different user roles
Data Volume
- While scalable, extremely large extracts (100M+ records) may require:
- Batch processing
- Incremental strategies
- File splitting
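One way to implement the file-splitting strategy above, once an oversized extract has been downloaded as CSV, is a header-preserving chunker. A sketch with stdlib Python; the naming scheme and chunk size are illustrative:

```python
import csv

def split_csv(src_path, rows_per_file=1_000_000):
    """Split one large CSV into numbered parts, repeating the header row.

    Returns the number of part files written.
    """
    with open(src_path, newline="") as src:
        reader = csv.reader(src)
        header = next(reader)
        part, writer, out = 0, None, None
        for i, row in enumerate(reader):
            if i % rows_per_file == 0:
                if out:
                    out.close()
                part += 1
                out = open(f"{src_path}.part{part}.csv", "w", newline="")
                writer = csv.writer(out)
                writer.writerow(header)  # each part stays independently loadable
            writer.writerow(row)
        if out:
            out.close()
    return part
```

Each part carries the header, so downstream ETL can load parts independently or in parallel.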
File Storage
- Extracted files stored temporarily
- Set retention policies
- Regular cleanup recommended
- Monitor storage quotas
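If extracts are staged on a file system you control after download, the recommended cleanup can be a scheduled script like this sketch; the retention window is an assumption, and this manages only your local copies, not files held inside Fusion itself:

```python
import os
import time

RETENTION_DAYS = 30  # assumption: adjust to your organization's policy

def purge_old_extracts(directory, retention_days=RETENTION_DAYS):
    """Delete files older than the retention window; return the names removed."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

Run it from your scheduler of choice (cron, OCI, etc.) alongside a storage-quota check.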
Cross-Module Extraction
- Can extract from any product family once enabled
- Business object availability varies by module
- Some objects may not be available in read-optimized store
- Verify specific object availability before planning extractions
🔍 Troubleshooting Guide
Issue: Feature Not Visible After Opt-In
Possible Causes:
- Parent feature not enabled
- Cache not refreshed
- User lacks privileges
Resolution:
- Sign out and sign back in
- Verify parent feature is enabled
- Check user role assignments
- Contact Oracle Support if persists
Issue: "Environment Not Configured" Error
Possible Causes:
- ADW replica not provisioned
- Replication not active
Resolution:
- Open SR with Oracle Support
- Request environment verification
- Provide feature opt-in details
- Wait for backend configuration
Issue: Extract Job Fails
Possible Causes:
- Business object not available in replica
- Invalid filter criteria
- Data volume exceeds limits
Resolution:
- Check job log for specific error
- Verify business object availability
- Simplify filters and retry
- Break into smaller extracts if needed
Issue: Data Appears Outdated
Possible Causes:
- Replication lag
- Sync process delayed
Resolution:
- Check replication status
- Wait 5-10 minutes and retry
- Contact Oracle Support if lag > 30 minutes
Issue: Cannot Download Files
Possible Causes:
- Missing download role
- File expired/deleted
- Browser issues
Resolution:
- Verify the OBIA_EXTRACTTRANSFORMLOAD_RWD role is assigned
- Check the file still exists in the repository
- Try different browser
- Clear cache and retry
📚 Additional Resources
Oracle Documentation
- What's New in Oracle Fusion Cloud (Release 26A)
- Oracle Fusion Cloud SCM: Using Data Extraction
- Oracle Fusion Cloud Common Features Guide
- Redwood Design System Documentation
Support
- Oracle Support: My Oracle Support (MOS)
- Search: "Redwood Data Extraction" in MOS
- Community: Oracle Cloud Customer Connect
Training
- Oracle University: Fusion Cloud Administrator courses
- Partner training on Redwood features
- Oracle Learning Library videos
✅ Implementation Checklist
Use this checklist when implementing the feature:
- [ ] Planning
  - [ ] Identify use cases and business requirements
  - [ ] Document required business objects
  - [ ] Define extraction schedules
  - [ ] Plan downstream integration
- [ ] Environment Setup
  - [ ] Enable feature via Opt-In
  - [ ] Verify parent feature enabled
  - [ ] Contact Oracle Support for environment verification
  - [ ] Confirm ADW provisioning complete
- [ ] Security Configuration
  - [ ] Set profile option: ORA_ASE_SAS_INTEGRATION_ENABLED = Yes
  - [ ] Assign data extraction privileges to roles
  - [ ] Enable permission groups
  - [ ] Assign download role: OBIA_EXTRACTTRANSFORMLOAD_RWD
  - [ ] Test access with different user personas
- [ ] Testing
  - [ ] Create test extraction (small dataset)
  - [ ] Verify file download
  - [ ] Validate data accuracy
  - [ ] Test scheduled extractions
  - [ ] Measure performance vs. legacy methods
- [ ] Production Rollout
  - [ ] Create production extraction jobs
  - [ ] Set up monitoring and alerts
  - [ ] Document procedures for users
  - [ ] Train end users
  - [ ] Establish support process
- [ ] Ongoing Management
  - [ ] Monitor extraction performance
  - [ ] Review and clean old files
  - [ ] Update schedules as needed
  - [ ] Stay current with Oracle updates
🎓 Quick Reference Card
Navigate to Tool:
Tools → Data Extraction
Required Profile Option:
ORA_ASE_SAS_INTEGRATION_ENABLED = Yes
Required Download Role:
OBIA_EXTRACTTRANSFORMLOAD_RWD
Data Freshness:
Near real-time (typically < 5 minutes lag)
Performance:
10-100x faster than legacy extraction
Zero impact on transactional system
Support Contact:
Oracle Support via My Oracle Support (MOS)
📝 Summary
The Redwood: Extract Application Data from a Read-Optimized Data Store feature represents a significant architectural improvement in Oracle Fusion Cloud's data extraction capabilities. By separating analytical workloads from transactional processing:
✅ Organizations gain: Faster extractions, zero production impact, and simplified data access
✅ Users benefit from: Modern interface, self-service capabilities, and reliable scheduling
✅ IT benefits include: Scalable architecture, reduced system load, and future-proof design
This feature is essential for any organization running reporting, analytics, or integration workflows that require regular or large-volume data extraction from Oracle Fusion Cloud.
