How to install SAPUI5 Plugins in Eclipse LUNA

Posted in SAP HANA, SAPUI5

SAP BusinessObjects Cloud: Create Story

  • First, you have to create a model. Please follow my previous blog:

http://wp.me/p2oyUE-Ma

  • Create Story
  • Choose

  • Select the dataset that we created in the last exercise

  • Add Dimensions and set the title of the report

  • Added 3 Dimensions

  • Below is the result for the selected State, City and Date

  • Save the story

  • Before saving the story, we have to save the chart. Click Cancel and save the chart first.

  • Click “Copy to Page 1”

  • Change the page name

  • Change the page name and click OK.

  • Click “Save”

  • Save

  • You can do formatting in many ways.
  • Also, you can share your Story
  • Click on Share link

  • You can share the story with specific users or with all users. Click OK.

  • Now try to insert a chart

  • Add a Measure and a Dimension, and add a filter

  • Copy the chart, change the chart type, and see the result

Posted in Cloud, SAP, SAP BusinessObjects

SAP BusinessObjects Cloud: Create Model using excel sheet

  • Login to SAP BusinessObjects Cloud
  • Use the Create Model option

  • Click on “Import a file from your computer”

  • The file selection window will open
  • Click on “Select Source File”

  • Select the file and click “Import”

  • Loading data

  • File uploaded successfully

  • Data uploaded

  • Now check all columns and set the proper Dimension or Measure for each. You can also set proper column names.
  • In my case I changed the Date column from Dimension to Time. The system will generate a hierarchy internally.

  • Once everything is set, click on “Create Model”

  • Confirm to create the Model.

  • Now save the model

  • Model updated successfully.

Posted in Cloud, SAP, SAP BusinessObjects

SAP Lumira sample

Posted in Lumira, SAP

Big Data with SAP HANA Vora: Important Queries using Zeppelin

  • Login to the SAP Cloud Appliance Library at https://cal.sap.com
  • Click on Connect and open Zeppelin

  • Login to Zeppelin

  • Create new note

  • Syntax to Create a table

    %vora CREATE TABLE CUSTOMER
    (CUSTOMER_ID string, REGION string, LONGITUDE int, LATITUDE int, CUSTOMER_GROUP string, LOCATION string)
    USING com.sap.spark.vora
    OPTIONS (tableName "CUSTOMER", paths "/user/vora/customer_data.csv")

  • Syntax to select from table

    %vora Select * from CUSTOMER

  • Listing tables and views

    %vora SHOW TABLES using com.sap.spark.vora

  • Loading tables from Vora into Spark

    %vora REGISTER ALL TABLES USING com.sap.spark.vora IGNORING CONFLICTS

  • Appending tables

    %vora APPEND TABLE SALES OPTIONS (paths "/user/vora/sales_2015_data.csv,/user/vora/sales_data.csv", eagerload "true")

  • Dropping tables

    %vora DROP TABLE CUSTOMER

  • Creating SQL views

  • Create Dimension View

    %vora CREATE DIMENSION VIEW CUSTOMERDIM
    AS SELECT CUSTOMER_ID, YEAR FROM SALES
    USING com.sap.spark.vora

    %vora Select * from CUSTOMERDIM

  • Create Cube view

    %vora CREATE CUBE VIEW SALESCUBE AS
    (SELECT * FROM CUSTOMERDIM C
    JOIN SALES S
    ON C.CUSTOMER_ID = S.CUSTOMER_ID)
    USING com.sap.spark.vora

    %vora select * from SALESCUBE

  • Check how the view is created

    %vora DESCRIBE TABLE SALES_2014 USING com.sap.spark.vora

  • Creating a table in Vora by loading data in Parquet format

    %vora CREATE TABLE SALES_P (CUSTOMER_ID string, YEAR string, REVENUE bigint)
    USING com.sap.spark.vora
    OPTIONS (tablename "SALES_P", paths "/user/vora/sales_p.parquet/*", format "parquet")

    %vora select * from SALES_P

  • Creating a table in Vora by loading data from ORC files

    %vora CREATE TABLE SALES_O (CUSTOMER_ID string, YEAR string, REVENUE bigint)
    USING com.sap.spark.vora
    OPTIONS (tablename "SALES_O", paths "/user/vora/sales_O.orc/*", format "orc")

    %vora select * from SALES_O


  • Create Hierarchies

    %vora CREATE TABLE OFFICERS (id int, pred int, ord int, rank string)
    USING com.sap.spark.vora
    OPTIONS (tableName "OFFICERS", paths "/user/vora/officers.csv")

    %vora SELECT * FROM OFFICERS

    %vora CREATE TABLE ADDRESSES (rank string, address string)
    USING com.sap.spark.vora
    OPTIONS (tableName "ADDRESSES", paths "/user/vora/addresses.csv")

    %vora SELECT * FROM ADDRESSES

    %vora CREATE VIEW HV AS SELECT * FROM HIERARCHY (
    USING OFFICERS AS child
    JOIN PARENT par ON child.pred = par.id
    SEARCH BY ord ASC
    START WHERE pred = 0
    SET node) AS H

    %vora select * from HV

  • Join the ADDRESSES and OFFICERS tables

    %vora SELECT HV.rank, A.address
    FROM HV, ADDRESSES A
    WHERE HV.rank = A.rank

  • Running UDFs on the Hierarchies
  • Returns the rank of the direct children of the root

    %vora SELECT Children.rank
    FROM HV Children, HV Parents
    WHERE IS_ROOT(Parents.node) AND IS_PARENT(Parents.node, Children.node)

  • Returns the address and the rank for the officers below level 2

    %vora SELECT OFFICERS.rank, ADDRESSES.address
    FROM (SELECT Descendants.rank AS rank
          FROM HV Parents, HV Descendants
          WHERE IS_DESCENDANT(Descendants.node, Parents.node) AND LEVEL(Parents.node) = 2) OFFICERS, ADDRESSES
    WHERE OFFICERS.rank = ADDRESSES.rank

Posted in SAP HANA, Vora

HIVE: Create table with data from 2 different tables

Scenario: In the Cloudera default database, 2 tables exist with sample data. The goal is to populate another table in a different database after combining the data from the 2 tables.

Solution:

  • Use the statements below to see the tables under the default database; for example:
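
A minimal sketch of this step in HiveQL, using the sample tables that ship with Cloudera:

    -- switch to the default database and list its tables
    use default;
    show tables;
    -- optionally inspect the structure of the two sample tables
    describe sample_07;
    describe sample_08;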

  • Now we have to filter the records from “sample_07” and “sample_08” and insert them into another database, “adil”, and table “employee100K”
  • Query 1: select * from sample_07 where salary > 100000;
  • Query 2: select * from sample_08 where salary > 100000;
  • Create Database “adil”

  • Now write the query to create the new table in the “adil” database with the same structure, combining Query 1 and Query 2; for example:
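
A hedged sketch of such a query, assuming the new table is built with CREATE TABLE ... AS SELECT and the two filtered result sets are combined with UNION ALL inside a subquery (the columns are taken over unchanged from the sample tables):

    -- create the combined table in the "adil" database
    create table adil.employee100K as
    select t.*
    from (
      select * from sample_07 where salary > 100000
      union all
      select * from sample_08 where salary > 100000
    ) t;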

  • Now see the result by writing a select statement, for example:
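
For example, reusing the table created above:

    select * from adil.employee100K limit 10;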

  • We can do the same thing using HUE.
  • Login to HUE and use “HIVE”

Posted in Big Data, HIVE

Import MySQL data to HDFS through Sqoop

  • Create Database in MySQL [Database: adil]
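
A minimal sketch of this step in MySQL:

    CREATE DATABASE adil;
    USE adil;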

  • Create Table in MySQL Database [Table: employees]
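
A hedged sketch of the table definition; the post does not show the actual columns, so the columns below are assumptions for illustration only:

    CREATE TABLE employees (
      id INT PRIMARY KEY,       -- assumed column
      name VARCHAR(100),        -- assumed column
      salary DECIMAL(10,2)      -- assumed column
    );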

  • Insert some data
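
Sample rows matching the assumed columns above (five rows, since the import later reports “Retrieved 5 Records”); the values are made up for illustration:

    INSERT INTO employees VALUES
      (1, 'Alice', 55000.00),
      (2, 'Bob',   62000.00),
      (3, 'Carol', 48000.00),
      (4, 'Dave',  73000.00),
      (5, 'Eve',   51000.00);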

  • Now try to import the data from MySQL by using the command below

    sqoop import --connect jdbc:mysql://192.168.1.7:3306/adil --username admin --password *********** --table employees -m 1

  • A success message shows “Retrieved 5 Records”.

  • Now check whether the data was imported.
  • The employees directory was created and the data is available in the part-m-00000 file.
  • Now we will open that file and view the data using command-line scripts; for example:
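
A hedged example of the HDFS commands, assuming the import ran as the default user so the output landed in the user's home directory (adjust the path to your environment):

    # list the directory Sqoop created for the imported table
    hdfs dfs -ls employees
    # print the contents of the generated part file
    hdfs dfs -cat employees/part-m-00000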

  • The employees data was imported successfully to HDFS.

Posted in Big Data, Hadoop