Expand data access through Apache Iceberg using Delta Lake UniForm on AWS
The landscape of big data management has been transformed by the rising popularity of open table formats such as Apache Iceberg, Apache Hudi, and Linux Foundation Delta Lake. These formats, designed to address the limitations of traditional data storage systems, have become essential in modern data architectures. As organizations adopt various open table formats to suit their specific needs, the demand for interoperability between these formats has grown significantly. This interoperability is crucial for enabling seamless data access, reducing data silos, and fostering a more flexible and efficient data ecosystem.
Delta Lake UniForm is an open table format extension designed to provide a universal data representation that can be efficiently read by different processing engines. It aims to bridge the gap between various data formats and processing systems, offering a standardized approach to data storage and retrieval. With UniForm, you can read Delta Lake tables as Apache Iceberg tables, which expands data access to a broader range of analytics engines.
This post explores how to start using Delta Lake UniForm on Amazon Web Services (AWS). You will learn how to query native Delta Lake tables through UniForm from different data warehouses or engines, using Amazon Redshift as an example of expanding data access to more engines.
How Delta Lake UniForm works
UniForm allows clients of other table formats, such as Apache Iceberg, to access Delta Lake tables. Under the hood, UniForm generates the Iceberg metadata and manifest files that Iceberg clients require to access the underlying data files in Delta Lake tables. Both the Delta Lake and Iceberg metadata files reference the same data files; UniForm generates metadata for multiple table formats without duplicating the actual data files. When an Iceberg client reads a UniForm table, it first accesses the Iceberg metadata files for the UniForm table, which then allows it to read the underlying data files.
There are two options to use UniForm:
- Create a new Delta Lake table with UniForm
- Enable UniForm on your existing Delta Lake table
First, to create a new Delta Lake table with UniForm enabled, you configure table properties for UniForm in a CREATE TABLE DDL query. The table properties are 'delta.universalFormat.enabledFormats'='iceberg' and 'delta.enableIcebergCompatV2'='true'. When these options are set in the CREATE TABLE query, Iceberg metadata files are generated along with the Delta Lake metadata files. In addition to these options, the Delta Lake table protocol versions that define the features supported by the table, delta.minReaderVersion and delta.minWriterVersion, must be set to 2 and 7 or higher, respectively. For more information about table protocol versions, refer to What is a table protocol specification? in the Delta Lake documentation. Appendix 1: Create a new Delta Lake table with UniForm shows an example query to create a new Delta Lake UniForm table.
You can also enable UniForm on an existing Delta Lake table. This option is suitable if you already have Delta Lake tables in your environment. Enabling UniForm doesn’t affect your current operations on the Delta Lake tables. To enable UniForm on a Delta Lake table, run REORG TABLE db.existing_delta_lake_table APPLY (UPGRADE UNIFORM(ICEBERG_COMPAT_VERSION=2)). After running this query, Delta Lake automatically generates Iceberg metadata files for the Iceberg client. In the example in this post, you take this option and enable UniForm after you create a Delta Lake table.
For more information about enabling UniForm, refer to Enable Delta Lake UniForm in the Delta Lake documentation. Note that the extra package (delta-iceberg) is required to create a UniForm table in the AWS Glue Data Catalog, and also to generate Iceberg metadata along with the Delta Lake metadata for the UniForm table.
Example use case
A fictional company built a data lake with Delta Lake on Amazon Simple Storage Service (Amazon S3) that’s mainly used through Amazon Athena. As its usage expands, the company wants to extend data access to cloud-based data warehouses such as Amazon Redshift for more flexible analytics use cases.
There are a few challenges in meeting this requirement. Delta Lake isn’t natively supported in Amazon Redshift. For such data warehouses, Delta Lake tables need to be converted to manifest tables, which adds operational overhead: you need to run the GENERATE command on Spark or use a crawler in AWS Glue to create manifest tables, and you need to sync those manifest tables every time the Delta tables are updated.
Delta Lake UniForm can be a solution to this requirement. With Delta Lake UniForm, you can make a Delta Lake table compatible with other open table formats such as Apache Iceberg, which is natively supported in Amazon Redshift. Users can then query those Delta Lake tables as Iceberg tables through UniForm.
The following diagram describes the architectural overview to achieve that requirement.
In this tutorial, you create a Delta Lake table with a synthetic review dataset that includes different products and customer reviews and enable UniForm on that Delta Lake table to make it accessible from Amazon Redshift. Each component works as follows in this scenario:
- Amazon EMR (Amazon EMR on EC2 cluster with Apache Spark): An Apache Spark application on an Amazon EMR cluster creates a Delta Lake table and enables UniForm on it. Only the Delta Lake client can write to the Delta Lake UniForm table, so Amazon EMR acts as the writer.
- Amazon Redshift: Amazon Redshift uses its Iceberg client to read records from the Delta Lake UniForm table. It is limited to reading records from the table and cannot write to it.
- Amazon S3 and AWS Glue Data Catalog: These are used to manage the underlying files and the catalog of the Delta Lake UniForm table. The data and metadata files for the table are stored in an S3 bucket. The table is registered in AWS Glue Data Catalog.
Set up resources
In this section, you complete the following resource setup:
- Launch an AWS CloudFormation template to configure resources such as S3 buckets, an Amazon Virtual Private Cloud (Amazon VPC) and a subnet, a database for Delta Lake in Data Catalog, AWS Identity and Access Management (IAM) policy and role with required permissions for Amazon EMR Studio, and an EC2 instance profile for Amazon EMR on EC2 cluster
- Launch an Amazon EMR on EC2 cluster
- Create an Amazon EMR Studio Workspace
- Upload a Jupyter Notebook on Amazon EMR Studio Workspace
- Launch a CloudFormation template to configure Amazon Redshift Serverless and relevant subnets
Launch a CloudFormation template to configure basic resources
You use a provided CloudFormation template to set up resources to build Delta Lake UniForm environments. The template creates the following resources.
- An S3 bucket to store the Delta Lake table data
- An S3 bucket to store Amazon EMR Studio Workspace metadata and configuration files
- An IAM role for Amazon EMR Studio
- An EC2 instance profile for Amazon EMR on EC2 cluster
- A VPC and subnet for an Amazon EMR on EC2 cluster
- A database for a Delta Lake table in Data Catalog
Complete the following steps to deploy the resources.
- Choose Launch stack:
- For Stack name, enter delta-lake-uniform-on-aws. For Parameters, DeltaDatabaseName, PublicSubnetForEMRonEC2, and VpcCIDRForEMRonEC2 are set by default; you can also change the default values. Then choose Next.
- Choose Next.
- Choose I acknowledge that AWS CloudFormation might create IAM resources with custom names.
- Choose Submit.
- After the stack creation is complete, check the Outputs tab. The resource values are used in the following sections and in the appendices.
Launch an Amazon EMR on EC2 cluster
Complete the following steps to create an Amazon EMR on EC2 cluster.
- Open the Amazon EMR on EC2 console.
- Choose Create cluster.
- Enter delta-lake-uniform-blog-post for Name and confirm that emr-7.3.0 is chosen as the release label.
- For Application bundle, select Spark 3.5.1, Hadoop 3.3.6, and JupyterEnterpriseGateway 2.6.0.
- For AWS Glue Data Catalog settings, enable Use for Spark table metadata.
- For Networking, enter the values from the CloudFormation Outputs tab for VpcForEMR and PublicSubnetForEMR into Virtual private cloud (VPC) and Subnet respectively. For EC2 security groups, keep Create ElasticMapReduce-Primary for Primary node, and Create ElasticMapReduce-Core for Core and task nodes. The security groups for the Amazon EMR primary and core nodes are automatically created.
- For Cluster logs, enter s3://<DeltaLakeS3Bucket>/emr-cluster-logs as the Amazon S3 location. Replace <DeltaLakeS3Bucket> with the S3 bucket from the CloudFormation stack Outputs tab.
- For Software settings, select Load JSON from Amazon S3 and enter s3://aws-blogs-artifacts-public/artifacts/BDB-4538/config.json as the Amazon S3 location.
- For Amazon EMR service role in the Identity and Access Management (IAM) roles section, choose Create a service role and keep the default Security Group. If there are existing security groups for the Amazon EMR primary and core or task nodes, set those security groups instead.
- For EC2 instance profile for Amazon EMR, select Choose an existing instance profile, and set Instance profile to EMRInstanceProfileRole.
- After reviewing the configuration, choose Create cluster.
- When the cluster status is Waiting on the Amazon EMR console, the cluster setup is complete (this takes approximately 10 minutes).
Create an Amazon EMR Studio Workspace
Complete the following steps to create an Amazon EMR Studio Workspace to use Delta Lake UniForm on Amazon EMR on EC2.
- Open the Amazon EMR Studio console.
- Choose Create Studio.
- For Setup options, choose Custom.
- For Studio settings, enter delta-lake-uniform-studio as the Studio name.
- For S3 location for Workspace storage, choose Select existing location as the Workspace storage. The S3 location (s3://aws-emr-studio-<ACCOUNT_ID>-<REGION>-delta-lake-uniform-on-aws) can be obtained from EMRStudioS3Bucket on the CloudFormation Outputs tab. Then, choose EMRStudioRole as the IAM role (you can find the IAM role name on the CloudFormation Outputs tab).
- For Workspace settings, enter delta-lake-workspace as the Workspace name.
- In Networking and security, choose the VPC ID and Subnet ID that you created in Launch a CloudFormation template to configure basic resources. You can obtain the VPC ID and Subnet ID from the VpcForEMR and PublicSubnetForEMR keys on the CloudFormation Outputs tab respectively.
- After reviewing the settings, choose Create Studio and launch Workspace.
- After the Studio Workspace is created, you are redirected to a Jupyter Notebook.
Upload Jupyter Notebook
Complete the following steps to configure a Jupyter Notebook to use Delta Lake UniForm with Amazon EMR.
- Download delta-lake-uniform-on-aws.ipynb.
- Choose the arrow icon at the top of the page and upload the Notebook you just downloaded.
- Choose and open the notebook (delta-lake-uniform-on-aws.ipynb) you uploaded in the left pane.
- After the notebook is opened, choose EMR Compute in the navigation pane.
- Attach the Amazon EMR on EC2 cluster you created in the previous section: choose EMR on EC2 cluster, select the cluster you created, and then choose Attach.
- After the cluster is attached successfully, the message Cluster is attached to the Workspace is displayed on the console.
Create a workgroup and a namespace for Amazon Redshift Serverless
For this step, you configure a workgroup and a namespace for Amazon Redshift Serverless to run queries on a Delta Lake UniForm table. You also configure two subnets in the same VPC created by the CloudFormation stack delta-lake-uniform-on-aws. To deploy the resources, complete the following steps:
- Choose Launch stack:
- For Stack name, enter redshift-serverless-for-delta-lake-uniform.
- For Parameters, enter the Availability Zone and an IP range for each subnet. The VPC ID is automatically retrieved from the CloudFormation stack you created in Launch a CloudFormation template to configure basic resources. If you change the default subnet, note that at least one subnet needs to be the same subnet you created for the Amazon EMR on EC2 cluster (by default, the subnet for the Amazon EMR on EC2 cluster is automatically retrieved during this CloudFormation stack creation). You can check the subnet for the cluster on the CloudFormation Outputs tab. Then, choose Next.
- Choose Next again, and then choose Submit.
- After the stack creation is complete, check the CloudFormation Outputs tab. Make a note of the two Subnet IDs to use later in Run queries against the UniForm table from Amazon Redshift.
Now you’re ready to use Delta Lake UniForm on Amazon EMR.
Enable Delta Lake UniForm
Start by creating a Delta Lake table that contains the customer review dataset. After creating the table, run a REORG query to enable UniForm on the Delta Lake table.
Create a Delta Lake table
Complete the following steps to create a Delta Lake table based on a customer review dataset and review the table metadata.
- Return to the Jupyter Notebook connected to the Amazon EMR on EC2 cluster and run the following cell to add delta-iceberg.jar to use UniForm and to configure the Spark extension.
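The original cell isn't reproduced in this extract. The following is a minimal sketch of a Sparkmagic %%configure cell; the delta-iceberg package version and the exact configuration keys are assumptions and should match the Delta Lake release bundled with emr-7.3.0:

```
%%configure -f
{
  "conf": {
    "spark.jars.packages": "io.delta:delta-iceberg_2.12:3.2.0",
    "spark.sql.extensions": "io.delta.sql.DeltaSparkSessionExtension",
    "spark.sql.catalog.spark_catalog": "org.apache.spark.sql.delta.catalog.DeltaCatalog"
  }
}
```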
- Initialize the SparkSession. The following configuration is necessary to use Iceberg through UniForm. Before running the code, replace <DeltaLakeS3Bucket> with the name of the S3 bucket for Delta Lake, which you can find on the CloudFormation stack Outputs tab.
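A minimal PySpark sketch of such an initialization, assuming the warehouse/ prefix layout that appears in the aws s3 ls output later in this post:

```python
from pyspark.sql import SparkSession

# The Delta extensions were set in the %%configure cell above; here we point
# the Spark warehouse at the Delta Lake bucket. Replace <DeltaLakeS3Bucket>
# with the bucket name from the CloudFormation Outputs tab.
spark = (
    SparkSession.builder
    .config("spark.sql.warehouse.dir", "s3://<DeltaLakeS3Bucket>/warehouse/")
    .getOrCreate()
)

# Database and table names used throughout the notebook
DB_TBL = "delta_uniform_db.customer_reviews"
```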
- Create a Spark DataFrame from customer reviews.
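The dataset location isn't shown in this extract, so the path below is a hypothetical placeholder; the cell looks roughly like this:

```python
# Hypothetical location of the synthetic review dataset; substitute the path
# used in the notebook
reviews_df = spark.read.parquet("s3://<your-dataset-bucket>/customer-reviews/")
reviews_df.printSchema()
```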
- Create a Delta Lake table with the customer reviews dataset. This step takes approximately 5 minutes.
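A sketch of the table creation, assuming the database and table names used elsewhere in this post (delta_uniform_db.customer_reviews):

```python
# Write the reviews as a Delta Lake table registered in AWS Glue Data Catalog
(
    reviews_df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable(DB_TBL)
)
```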
- Run DESCRIBE EXTENDED {DB_TBL} in the next cell to review the table. The output includes the table schema, location, table properties, and so on.
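For reference, the cell is equivalent to:

```python
spark.sql(f"DESCRIBE EXTENDED {DB_TBL}").show(100, truncate=False)
```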
The Delta Lake table creation is complete. Next, enable UniForm on this Delta Lake table.
Run a REORG query to enable UniForm
To allow an Iceberg client to access the Delta Lake table you created, enable UniForm on the table. You can also create a new Delta Lake table with UniForm enabled. For more information, see Appendix 1 at the end of this post. To enable UniForm and review the table metadata, complete the following steps.
- Run the following query to enable UniForm on the Delta Lake table. To enable UniForm on an existing Delta Lake table, you run a REORG query against the table.
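Based on the REORG syntax shown earlier in this post, the cell looks like this:

```python
# Upgrade the existing Delta Lake table so that UniForm generates Iceberg metadata
spark.sql(f"""
    REORG TABLE {DB_TBL}
    APPLY (UPGRADE UNIFORM(ICEBERG_COMPAT_VERSION = 2))
""").show(truncate=False)
```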
- Run DESCRIBE EXTENDED {DB_TBL} in the next cell to review the table metadata, and compare it from before and after enabling UniForm. The new properties, such as delta.enableIcebergCompatV2=true and delta.universalFormat.enabledFormats=iceberg, are added to the table properties.
- Run aws s3 ls s3://<DeltaLakeS3Bucket>/warehouse/ --recursive to confirm that the Iceberg table metadata has been created. Replace <DeltaLakeS3Bucket> with the S3 bucket from the CloudFormation Outputs tab. The following screenshot shows the command output of table metadata and data files. You can confirm that Delta Lake UniForm generates both Iceberg metadata and Delta Lake metadata files, as indicated by the red rectangles.
- Before querying the Delta Lake UniForm table from an Iceberg client, run the following analytic query against the Delta Lake UniForm table from the Amazon EMR on EC2 side, and review the review count for each product category.
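The exact query isn't reproduced here; a plausible sketch of a count-by-category query is:

```python
spark.sql(f"""
    SELECT product_category, COUNT(*) AS review_count
    FROM {DB_TBL}
    GROUP BY product_category
    ORDER BY review_count DESC
""").show()
```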
- The query result shows the output of the review count by product_category:
Enabling UniForm on the Delta Lake table is complete, and now you can query the Delta Lake table from an Iceberg client. Next, you query the Delta Lake table as an Iceberg table from Amazon Redshift.
Run queries against the UniForm table from Amazon Redshift
In the previous section, you enabled UniForm on your existing Delta Lake table. This allows you to run queries from Amazon Redshift on the Delta Lake table as if it were an Iceberg table. In this section, you run an analytic query on the UniForm table using Amazon Redshift Serverless and add records with a new product category to the UniForm table through the Jupyter Notebook connected to the Amazon EMR on EC2 cluster. Then, you verify the added records with another analytic query from Amazon Redshift. Through these steps, you can confirm that Delta Lake UniForm enables Amazon Redshift to query the Delta Lake table.
Query the UniForm table from Amazon Redshift Serverless
- Open the Amazon Redshift Serverless console.
- In Namespaces/Workgroups, select the delta-lake-uniform-namespace that you created using the CloudFormation stack.
- Choose Query data in the upper-right corner to open the Amazon Redshift query editor.
- After opening the editor, select the delta-lake-uniform-workgroup workgroup in the left pane.
- Choose Create connection.
- After you successfully create a connection, you can see the delta_uniform_db database and customer_reviews table you created in the left pane of the editor.
- Copy and paste the following analytic query into the editor and choose Run.
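The query isn't reproduced in this extract. The sketch below assumes the Glue Data Catalog is auto-mounted in Redshift as the awsdatacatalog database; adjust the qualification to match what appears in your editor's left pane:

```sql
SELECT product_category, COUNT(*) AS review_count
FROM awsdatacatalog.delta_uniform_db.customer_reviews
GROUP BY product_category
ORDER BY review_count DESC;
```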
- The editor shows the same result of the review count by product_category as you obtained from the Jupyter Notebook in Run a REORG query to enable UniForm.
Add new product category records into Delta Lake UniForm table from Amazon EMR
Go back to the Jupyter Notebook on the Amazon EMR Workspace to add new records with a new product category (Books) into the Delta Lake UniForm table. After adding the records, query the UniForm table again from Amazon Redshift Serverless.
- On the Jupyter Notebook, go to Add new product category records into the UniForm table and run the following cell to load new records.
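The cell isn't shown in this extract; a hypothetical sketch of loading the additional records is:

```python
# Hypothetical location of the additional "Books" reviews; substitute the path
# used in the notebook
new_reviews_df = spark.read.parquet("s3://<your-dataset-bucket>/customer-reviews-books/")
```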
- Run the following cell and review the five records with Books as the product category. The following screenshot shows the output of this code.
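The cell is equivalent to something like:

```python
new_reviews_df.filter("product_category = 'Books'").show(5, truncate=False)
```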
- Add the new reviews with the Books product category. This takes around 2 minutes.
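A sketch of the append, reusing the DB_TBL name defined earlier in the notebook:

```python
# Append the new records; UniForm keeps the Iceberg metadata in sync with Delta Lake
(
    new_reviews_df.write
    .format("delta")
    .mode("append")
    .saveAsTable(DB_TBL)
)
```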
In the next section, you run a query on the UniForm table from Amazon Redshift Serverless to check whether the new records with the Books product category have been added.
Review the added records in Delta Lake UniForm table from Amazon Redshift Serverless
To check whether the result output includes the records of the Books product category:
- On the query editor of Amazon Redshift, run the following query and check whether the result output includes the records of the Books product category.
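Again a sketch, under the same assumptions as the earlier Redshift query:

```sql
SELECT product_category, COUNT(*) AS review_count
FROM awsdatacatalog.delta_uniform_db.customer_reviews
GROUP BY product_category
ORDER BY review_count DESC;
```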
- The following screenshot shows the output of the query you ran in the previous step. You can confirm that the new product category Books has been added to the table from the Amazon Redshift side.
By enabling Delta Lake UniForm, you can now query the Delta Lake table from Amazon Redshift.
Clean up resources
To clean up your resources, complete the following steps:
- In the Amazon EMR Workspaces console, choose Actions and then Delete to delete the workspace.
- Choose Delete to delete the Studio.
- In the Amazon EMR on EC2 console, choose Terminate to delete the Amazon EMR on EC2 cluster.
- In the Amazon S3 console, choose Empty to delete all objects in the following S3 buckets.
- The S3 bucket for Amazon EMR Studio, such as aws-emr-studio-<ACCOUNT_ID>-<REGION>-delta-lake-uniform-on-aws. Replace <ACCOUNT_ID> and <REGION> with your account ID and the bucket’s Region.
- The S3 bucket for Delta Lake tables, such as delta-lake-uniform-on-aws-deltalakes3bucket-abcdefghijk.
- After you confirm the two buckets are empty, delete the CloudFormation stack redshift-serverless-for-delta-lake-uniform.
- After the first CloudFormation stack has been deleted, delete the CloudFormation stack delta-lake-uniform-on-aws.
Conclusion
Delta Lake UniForm on AWS represents an advancement in addressing the challenges of data interoperability and accessibility in modern big data architectures. By enabling Delta Lake tables to be read as Apache Iceberg tables, UniForm expands data access capabilities, allowing organizations to use a broader range of analytics engines and data warehouses such as Amazon Redshift.
The practical implications of this technology are substantial, offering new possibilities for data analysis and insights across diverse platforms. As organizations continue to navigate the complexities of big data, solutions like Delta Lake UniForm that promote interoperability and reduce data silos will become increasingly valuable.
By adopting these advanced open table formats and using cloud platforms such as AWS, organizations can build more robust and efficient data ecosystems. This approach not only enhances the value of existing data assets but also fosters a more agile and adaptable data strategy, ultimately driving innovation and improving decision-making processes in our data-driven world.
Appendix 1: Create a new Delta Lake table with UniForm
You can create a Delta Lake table with UniForm enabled using the following DDL.
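The DDL isn't reproduced in this extract. The following sketch combines the table properties described in How Delta Lake UniForm works; the column list is a hypothetical example:

```sql
CREATE TABLE delta_uniform_db.customer_reviews (
  product_category STRING,   -- hypothetical columns for illustration
  product_title    STRING,
  star_rating      INT,
  review_body      STRING
)
USING DELTA
TBLPROPERTIES (
  'delta.universalFormat.enabledFormats' = 'iceberg',
  'delta.enableIcebergCompatV2' = 'true',
  'delta.minReaderVersion' = '2',
  'delta.minWriterVersion' = '7'
);
```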
Appendix 2: Run queries from Snowflake against the UniForm table
Delta Lake UniForm also allows you to run queries on a Delta Lake table from Snowflake. In this section, you run the same analytic query on the UniForm table using Snowflake as you previously did using Amazon Redshift Serverless in Run queries against the UniForm table from Amazon Redshift. Then you confirm that the query results from Snowflake match the results obtained from the Amazon Redshift Serverless query.
Configure IAM roles for Snowflake to access AWS Glue Data Catalog and Amazon S3
To query the Delta Lake UniForm table in Data Catalog from Snowflake, the following configurations are required.
- IAM roles: Create IAM roles for Snowflake to access Data Catalog and Amazon S3.
- Data Catalog integration with Snowflake: Snowflake provides two catalog options for Iceberg tables: using Snowflake itself as the Iceberg catalog, or using an external catalog such as Data Catalog. In this post, you choose AWS Glue Data Catalog as an external catalog. For information about the catalog options, refer to Iceberg catalog options in the Snowflake documentation.
- An external volume for Amazon S3: To access the UniForm table from Snowflake, an external volume for Amazon S3 needs to be configured. With this configuration, Snowflake can connect to the S3 bucket that you created for Iceberg tables. For information about external volumes, refer to Configure an external volume for Iceberg tables.
Create IAM roles for Snowflake to access AWS Glue Data Catalog and Amazon S3
Create the following two IAM roles for Snowflake to access AWS Glue Data Catalog and Amazon S3.
- SnowflakeIcebergGlueCatalogRole: This IAM role is used for Snowflake to access the Delta Lake UniForm table in AWS Glue Data Catalog.
- SnowflakeIcebergS3Role: This IAM role is used for Snowflake to access the table’s underlying data in the S3 bucket.
To configure the IAM roles, complete the following steps:
- Choose Launch stack:
- Enter snowflake-iceberg as the stack name and choose Next.
- Select I acknowledge that AWS CloudFormation might create IAM resources with custom names.
- Choose Submit.
- After the stack creation is complete, check the CloudFormation Outputs tab. Make a note of the names and ARNs of the two IAM roles, which are used in the following section.
Create an AWS Glue Data Catalog Integration
Create a catalog integration for AWS Glue Data Catalog. For more information about the catalog integration for AWS Glue Data Catalog, refer to Configure a catalog integration for AWS Glue in the Snowflake public documentation. To configure the catalog integration, complete the following steps:
- Access your Snowflake account and open an empty worksheet (query editor).
- Run the following query to create a catalog integration with AWS Glue Data Catalog. Replace <YOUR_ACCOUNT_ID> with the IAM role ARN from the snowflake-iceberg CloudFormation Outputs tab, and replace <REGION> with the Region of AWS Glue Data Catalog.
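The query isn't reproduced in this extract. The sketch below follows Snowflake's documented CREATE CATALOG INTEGRATION syntax for AWS Glue; the integration name and namespace match those used later in this post, and the role name comes from the snowflake-iceberg stack:

```sql
CREATE OR REPLACE CATALOG INTEGRATION glue_catalog_integration
  CATALOG_SOURCE = GLUE
  CATALOG_NAMESPACE = 'delta_uniform_db'
  TABLE_FORMAT = ICEBERG
  GLUE_AWS_ROLE_ARN = 'arn:aws:iam::<YOUR_ACCOUNT_ID>:role/SnowflakeIcebergGlueCatalogRole'
  GLUE_CATALOG_ID = '<YOUR_ACCOUNT_ID>'
  GLUE_REGION = '<REGION>'
  ENABLED = TRUE;
```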
- Retrieve GLUE_AWS_IAM_USER_ARN and GLUE_AWS_EXTERNAL_ID by running DESCRIBE CATALOG INTEGRATION glue_catalog_integration in the editor. The output is similar to the following:
- Update the IAM role you created using the CloudFormation stack so that Snowflake can access AWS Glue Data Catalog using that IAM role. Open Trust relationships for SnowflakeIcebergGlueCatalogRole on the IAM console, choose Edit, and update the trust relationship using the following policy. Replace <GLUE_AWS_IAM_USER_ARN> and <GLUE_AWS_EXTERNAL_ID> with the values you obtained in the previous step.
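The policy isn't included in this extract; a standard trust policy of this shape (a sketch, not the exact document from the post) is:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "<GLUE_AWS_IAM_USER_ARN>" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "<GLUE_AWS_EXTERNAL_ID>" }
      }
    }
  ]
}
```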
You completed setting up the IAM role for Snowflake to access your Data Catalog resources. Next, configure the IAM role for Amazon S3 access.
Register Amazon S3 as an external volume
In this section, you configure an external volume for Amazon S3. Snowflake accesses the UniForm table data files in S3 through the external volume. For the configuration of an external volume for S3, refer to Configure an external volume for Amazon S3 in the Snowflake documentation. To configure the external volume, complete the following steps:
- In the query editor, run the following query to create an external volume for the Delta Lake S3 bucket. Replace <DeltaLakeS3Bucket> with the name of the S3 bucket that you created in Launch a CloudFormation template to configure basic resources (from the CloudFormation Outputs tab). Replace <ACCOUNT_ID> with your AWS account ID.
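A sketch following Snowflake's documented CREATE EXTERNAL VOLUME syntax, using the volume name referenced later in this post (the storage location name is an assumption):

```sql
CREATE OR REPLACE EXTERNAL VOLUME delta_lake_uniform_s3
  STORAGE_LOCATIONS = (
    (
      NAME = 'delta-lake-uniform-s3'
      STORAGE_PROVIDER = 'S3'
      STORAGE_BASE_URL = 's3://<DeltaLakeS3Bucket>/'
      STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::<ACCOUNT_ID>:role/SnowflakeIcebergS3Role'
    )
  );
```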
- Retrieve STORAGE_AWS_IAM_USER_ARN and STORAGE_AWS_EXTERNAL_ID by running DESCRIBE EXTERNAL VOLUME delta_lake_uniform_s3 in the editor. The output is similar to the following:
- Update the IAM role you created using the CloudFormation template (in Create IAM roles for Snowflake to access AWS Glue Data Catalog and Amazon S3) to enable Snowflake to use this IAM role. Open Trust relationships for SnowflakeIcebergS3Role on the IAM console, choose Edit, and update the trust relationship with the following policy. Replace <STORAGE_AWS_IAM_USER_ARN> and <STORAGE_AWS_EXTERNAL_ID> with the values from the previous step.
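Again, a sketch of the standard trust-policy shape:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "<STORAGE_AWS_IAM_USER_ARN>" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "<STORAGE_AWS_EXTERNAL_ID>" }
      }
    }
  ]
}
```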
The next step is to create an Iceberg table to run queries from Snowflake.
Create an Iceberg table in Snowflake
In this section, you create an Iceberg table in Snowflake. The table is an entry point for Snowflake to access the Delta Lake UniForm table in AWS Glue Data Catalog. To create the table, complete the following steps:
- (Optional) If you don’t have a database in Snowflake, run CREATE DATABASE <DATABASE_NAME>, replacing <DATABASE_NAME> with a unique database name for the Iceberg table.
- Run the following query in the Snowflake query editor. In this case, the database delta_uniform_snow_db is chosen for the table. Configure the following parameters:
  - EXTERNAL_VOLUME: created by the CREATE OR REPLACE EXTERNAL VOLUME query in the previous section, such as delta_lake_uniform_s3.
  - CATALOG: created by the CREATE CATALOG INTEGRATION query in the previous section, such as glue_catalog_integration.
  - CATALOG_TABLE_NAME: the name of the Delta Lake UniForm table you created in Data Catalog, such as customer_reviews.
The complete query is below:
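The query isn't included in this extract; a sketch following Snowflake's CREATE ICEBERG TABLE syntax for an externally managed catalog (the schema name public is an assumption):

```sql
CREATE OR REPLACE ICEBERG TABLE delta_uniform_snow_db.public.customer_reviews
  EXTERNAL_VOLUME = 'delta_lake_uniform_s3'
  CATALOG = 'glue_catalog_integration'
  CATALOG_TABLE_NAME = 'customer_reviews';
```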
After the table creation is complete, you’re ready to query the UniForm table in AWS Glue Data Catalog from Snowflake.
Query the UniForm table from Snowflake
In this step, you query the UniForm table from Snowflake. Paste and run the following analytic query in the Snowflake query editor.
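A sketch of the same count-by-category query used earlier, adjusted to the Snowflake table name:

```sql
SELECT product_category, COUNT(*) AS review_count
FROM delta_uniform_snow_db.public.customer_reviews
GROUP BY product_category
ORDER BY review_count DESC;
```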
The query result shows the same output as you saw in the Review the added records in Delta Lake UniForm table from Amazon Redshift Serverless section.
By enabling Delta Lake UniForm, you can now query the Delta Lake table from Snowflake.
About the Authors
Tomohiro Tanaka is a Senior Cloud Support Engineer at Amazon Web Services. He’s passionate about helping customers use Apache Iceberg for their data lakes on AWS. In his free time, he enjoys a coffee break with his colleagues and making coffee at home.
Noritaka Sekiyama is a Principal Big Data Architect on the AWS Glue team. Based in Tokyo, Japan, he is responsible for building software artifacts to help customers. In his spare time, he enjoys cycling on his road bike.