SAP Lumira sample

Posted in Lumira, SAP

Big Data with SAP HANA Vora: Important Queries using Zeppelin

  • Log in to the SAP Cloud Appliance Library at https://cal.sap.com
  • Click on "Connect" and open Zeppelin

  • Log in to Zeppelin

  • Create a new note

  • Syntax to Create a table

    %vora CREATE TABLE CUSTOMER
    (CUSTOMER_ID string, REGION string, LONGITUDE int, LATITUDE int, CUSTOMER_GROUP string, LOCATION string)
    USING com.sap.spark.vora
    OPTIONS
    (tableName "CUSTOMER", paths "/user/vora/customer_data.csv")

  • Syntax to select from table

    %vora Select * from CUSTOMER

  • Listing tables and views

    %vora SHOW TABLES using com.sap.spark.vora

  • Loading tables from Vora into Spark

    %vora REGISTER ALL TABLES USING com.sap.spark.vora IGNORING CONFLICTS

  • Appending tables

    %vora APPEND TABLE SALES OPTIONS (paths "/user/vora/sales_2015_data.csv,/user/vora/sales_data.csv", eagerload "true")

  • Dropping tables

    %vora DROP TABLE CUSTOMER

  • Creating SQL views

  • Create Dimension View

    %vora CREATE DIMENSION VIEW CUSTOMERDIM
    AS SELECT CUSTOMER_ID, YEAR FROM SALES
    USING com.sap.spark.vora

    %vora Select * from CUSTOMERDIM

  • Create Cube view

    %vora CREATE CUBE VIEW SALESCUBE
    AS
    (SELECT * FROM CUSTOMERDIM C
    JOIN SALES S
    ON C.CUSTOMER_ID = S.CUSTOMER_ID)
    USING com.sap.spark.vora

    %vora select * from SALESCUBE

  • Check how the view is created

    %vora DESCRIBE TABLE SALES_2014 USING com.sap.spark.vora

  • Creating a table in Vora, loading data from Parquet format

    %vora CREATE TABLE SALES_P (CUSTOMER_ID string, YEAR string, REVENUE bigint)
    USING com.sap.spark.vora
    OPTIONS (tablename "SALES_P", paths "/user/vora/sales_p.parquet/*", format "parquet")

    %vora select * from SALES_P

  • Creating a table in Vora, loading data from ORC files

    %vora CREATE TABLE SALES_O (CUSTOMER_ID string, YEAR string, REVENUE bigint)
    USING com.sap.spark.vora
    OPTIONS (tablename "SALES_O", paths "/user/vora/sales_O.orc/*", format "orc")

    %vora select * from SALES_O


  • Create Hierarchies

    %vora CREATE TABLE OFFICERS (id int, pred int, ord int, rank string)
    USING com.sap.spark.vora
    OPTIONS (tableName "OFFICERS", paths "/user/vora/officers.csv")

    %vora SELECT * FROM OFFICERS

    %vora CREATE TABLE ADDRESSES (rank string, address string)
    USING com.sap.spark.vora
    OPTIONS (tableName "ADDRESSES", paths "/user/vora/addresses.csv")

    %vora SELECT * FROM ADDRESSES

    %vora CREATE VIEW HV AS SELECT * FROM HIERARCHY (
    USING OFFICERS AS child
    JOIN PARENT par ON child.pred = par.id
    SEARCH BY ord ASC
    START WHERE pred = 0
    SET node) AS H

    %vora select * from HV

  • Join the hierarchy view HV with the ADDRESSES table

    %vora SELECT HV.rank, A.address
    FROM HV, ADDRESSES A
    WHERE HV.rank = A.rank

  • Running UDFs on the hierarchies
  • Returns the rank of the direct children of the root

    %vora SELECT Children.rank
    FROM HV Children, HV Parents
    WHERE IS_ROOT(Parents.node) AND IS_PARENT(Parents.node, Children.node)

  • Returns the address and the rank for the officers from level 2

    %vora SELECT OFFICERS.rank, ADDRESSES.address
    FROM (SELECT Descendants.rank AS rank FROM HV Parents, HV Descendants
    WHERE IS_DESCENDANT(Descendants.node, Parents.node) AND LEVEL(Parents.node) = 2
    ) OFFICERS, ADDRESSES
    WHERE OFFICERS.rank = ADDRESSES.rank

Posted in SAP HANA, Vora

HIVE: Create a table with data from two different tables

Scenario: In the Cloudera default database, two tables exist with sample data. The task is to populate a new table in a different database after combining the data from these two tables.

Solution:

  • Use the statements below to see the tables under the default database (a consolidated HiveQL sketch follows this list).

  • Now we have to filter the records from "sample_07" and "sample_08" and insert them into the table "employee100K" in another database, "adil".
  • Query 1: select * from sample_07 where salary > 100000;
  • Query 2: select * from sample_08 where salary > 100000;
  • Create the database "adil".

  • Now write the query that creates the new table in the "adil" database with the same structure, combining Query 1 and Query 2.

  • Check the result with a select statement.

  • The same can be done using Hue.
  • Log in to Hue and open "Hive".
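
A minimal HiveQL sketch of the steps above, assuming the standard Cloudera sample tables. The column structure of employee100K is taken over from sample_07/sample_08 via CREATE TABLE AS SELECT; the UNION ALL is wrapped in a subquery because older Hive releases require that form.

    -- list the tables that ship with the default database
    USE default;
    SHOW TABLES;

    -- create the target database and build employee100K from the two filtered queries
    CREATE DATABASE IF NOT EXISTS adil;
    CREATE TABLE adil.employee100K AS
    SELECT t.* FROM (
      SELECT * FROM sample_07 WHERE salary > 100000
      UNION ALL
      SELECT * FROM sample_08 WHERE salary > 100000
    ) t;

    -- check the result
    SELECT * FROM adil.employee100K;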

Posted in Big Data, HIVE

Import MySQL data to HDFS through Sqoop

  • Create Database in MySQL [Database: adil]

  • Create Table in MySQL Database [Table: employees]

  • Insert some data

  • Now import the data from MySQL using the command below:

    sqoop import --connect jdbc:mysql://192.168.1.7:3306/adil --username admin --password *********** --table employees -m 1

  • The success message shows "Retrieved 5 records".

  • Now check whether the data was imported.
  • An employees directory was created and the data is available in the part-m-00000 file.
  • Now view that file from the command line to see the data (see the sketch after this list).

  • The employees data has been imported successfully into HDFS.
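
A hedged sketch of the surrounding steps. The employees column list and sample rows are illustrative assumptions (the original post only showed them as screenshots), and the HDFS path follows Sqoop's default of writing an employees directory under the user's home directory.

    # MySQL side: create the adil database and the employees table, then insert sample rows
    mysql -u admin -p -e "CREATE DATABASE adil;
    CREATE TABLE adil.employees (id INT PRIMARY KEY, name VARCHAR(50), salary INT);
    INSERT INTO adil.employees VALUES (1,'Amit',45000),(2,'Bina',52000),(3,'Chen',61000),(4,'Dara',38000),(5,'Esra',70000);"

    # after the sqoop import, verify the directory and view the imported records
    hadoop fs -ls employees
    hadoop fs -cat employees/part-m-00000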

Posted in Big Data, Hadoop

How to Install Sqoop

  • Download the latest Sqoop connector (ZIP file) from the Apache site and extract it.

  • Now connect to the server through FTP.
  • Create a "sqoop" folder under "/usr/lib".
  • Copy the connector jar files to the sqoop lib folder.
  • Now move to the sqoop folder.
  • Copy all extracted files to the sqoop folder (see the shell sketch after this list).
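
A minimal shell sketch of the copy steps, assuming the archive was extracted to ~/sqoop-1.4.6 and that the target is /usr/lib/sqoop; the exact version, connector jar name, and paths are assumptions.

    # create the target folder and copy the extracted Sqoop files into it
    sudo mkdir -p /usr/lib/sqoop
    sudo cp -r ~/sqoop-1.4.6/* /usr/lib/sqoop/

    # copy the JDBC connector jar (e.g. the MySQL connector) into sqoop's lib folder
    sudo cp ~/mysql-connector-java-5.1.38-bin.jar /usr/lib/sqoop/lib/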

Posted in Big Data, Hadoop

SAP Fiori on cloud: Create a New Destination to a Public OData Provider

  • Access the SAP HANA Cloud Platform Cockpit via this link:

    https://account.hanatrial.ondemand.com/cockpit

  • Click on “New Destination”

  • Enter the destination values and press "Save" (an example set of values is sketched after this list).

  • Click on “Check Connection”

  • Connection successful.
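
The screenshot with the destination values is not reproduced here; the following is an illustrative set of values for a public OData provider. The destination name and the Northwind demo service at services.odata.org are assumptions, not the post's original values.

    Name            publicodata
    Type            HTTP
    Description     Public OData provider
    URL             http://services.odata.org
    Proxy Type      Internet
    Authentication  NoAuthentication

On a trial account, the additional properties WebIDEEnabled = true and WebIDEUsage = odata_gen are typically added so that SAP Web IDE can offer the destination when creating a Fiori project.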

Posted in Fiori, SAP HANA

Installation and Configuration of a JetBrains IDE for MongoDB

  • Download JetBrains IDE

    https://d1opms6zj7jotq.cloudfront.net/idea/ideaIC-15.0.3.exe

  • Run the EXE and install the setup.
  • Run "IntelliJ IDEA Community Edition 15.0.3".
  • Make sure the "Mongo Plugin" is installed correctly. If not, search for it and install it.

  • Now define the path to the mongo executable and test it.

  • Now add the Mongo server and press "OK".

  • Now move to Mongo Explorer

  • Select Localhost and connect

  • Explorer will show the databases as below

  • Select any database and click on the Mongo Shell.

  • The window will appear as below.

  • To see the current database, type "db" and click the green arrow; to list all databases, use "show dbs" (a short sketch follows this list).

  • The configuration and connection with MongoDB are complete.
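
A few commands to try in the Mongo Shell panel once the connection is working; the database name "test" and the "customers" collection are illustrative assumptions only.

    // show the database the shell is currently using
    db
    // list all databases on the server
    show dbs
    // switch to a database and preview a few documents
    use test
    db.customers.find().limit(5)
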
Posted in Big Data, MongoDB