
Jira: ARCDEVOPS-2

AWS RDS and Aurora Databases

Connect a local Java JPA client to a remote RDS instance via an SSH tunnel through an EC2 bastion

Amazon's official guide: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.Scenarios.html
There are a couple of options - two of which involve either the JSch library or a local SSH tunnel - see https://medium.com/@phoenixrising_93140/connecting-to-remote-mysql-via-manual-ssh-tunnel-from-spring-boot-applications-646dcf240d7f

Test SSH access locally first by spinning up an Ubuntu sshd container - https://hub.docker.com/r/rastasheep/ubuntu-sshd/

RDS Tunnel Connection from Java via local SSH tunnel

Create a local SSH tunnel to the bastion EC2 instance in the VPC containing the RDS instance.

Note that the local port 3406 is arbitrary - it forwards to the standard MySQL port 3306 on the RDS endpoint.

Local SSH Tunnel in Windows via Putty

TBD 20190924+

Local SSH Tunnel in OSX
obrienlabs:biometric.web.docker mi..n$ ssh -f ubuntu@j*.obrienlabs.cloud -L 127.0.0.1:3406:prod*.us-east-1.rds.amazonaws.com:3306 -N -i ~/.ssh/obrien*.pem 
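Before pointing JDBC at the tunnel, a quick stdlib-only check that the local forward is actually listening can save debugging time. A minimal sketch - port 3406 matches the tunnel above; the class and method names are illustrative:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Quick sanity check that the local tunnel endpoint is accepting connections
// before attempting a JDBC connection through it.
public class TunnelCheck {
    // Returns true if something is accepting TCP connections on host:port within timeoutMs.
    public static boolean isListening(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("tunnel up: " + isListening("127.0.0.1", 3406, 2000));
    }
}
```

If this prints `tunnel up: false`, fix the ssh command before debugging the JDBC layer.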

Raw JDBC example

package systems.obrien.biometric.integration;

import java.sql.Connection;
import java.sql.DriverManager;

public class JDBCTest {
    private void testConnection(String username, String password, String useSSL) {
        Connection connection;
        try {
            Class.forName("com.mysql.jdbc.Driver");
            // use host.docker.internal instead of 127.0.0.1 if running in a docker container
            connection = DriverManager.getConnection(
                    "jdbc:mysql://127.0.0.1:3406/aSchema?useSSL=" + useSSL, username, password);
            System.out.println(connection.getMetaData().getUserName());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        JDBCTest test = new JDBCTest();
        // Pass "false" for useSSL to avoid the MySQL 5.5.45+/5.6.26+/5.7.6+ warning:
        // "Establishing SSL connection without server's identity verification is not
        // recommended" - either disable SSL explicitly with useSSL=false, or set
        // useSSL=true and provide a truststore for server certificate verification.
        test.testConnection("aUsername", "aPassword", "true");
    }
}
Result:
o..s@1...


Spring JPA example

Adjust the Tomcat or Spring Boot configuration environment variables for consumption by the property placeholder configurers.


spring.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
   xmlns:util="http://www.springframework.org/schema/util"
   xmlns:context="http://www.springframework.org/schema/context"
   xmlns:tx="http://www.springframework.org/schema/tx" 
   xmlns:p="http://www.springframework.org/schema/p"
   xmlns:jpa="http://www.springframework.org/schema/data/jpa"
   xmlns:aop="http://www.springframework.org/schema/aop"
   xmlns:mvc="http://www.springframework.org/schema/mvc"
   xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd
       http://www.springframework.org/schema/context
       http://www.springframework.org/schema/context/spring-context.xsd
       http://www.springframework.org/schema/tx
       http://www.springframework.org/schema/tx/spring-tx.xsd
       http://www.springframework.org/schema/data/jpa
       http://www.springframework.org/schema/data/jpa/spring-jpa.xsd
       http://www.springframework.org/schema/mvc
       http://www.springframework.org/schema/mvc/spring-mvc.xsd
       http://www.springframework.org/schema/aop
       http://www.springframework.org/schema/aop/spring-aop.xsd"
   default-lazy-init="true">
    <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
        <property name="driverClassName" value="com.mysql.jdbc.Driver" />
        <property name="url" value="${os.environment.db.url}/${os.environment.db.biometric.schema}" />
        <property name="username" value="${os.environment.db.username}" />
        <property name="password" value="${os.environment.db.password}" />
    </bean>
    
    <!-- context:property-placeholder location="WEB-INF/db.properties"/-->
    <!-- fixes Caused by: java.sql.SQLException: No suitable driver found for ${os.environment.db.url} -->
    <!-- spring 3.1 only --><!-- context:property-placeholder/--> <!-- location="classpath*:META-INF/spring/db.properties"/-->
    <!-- spring 3.2 environment must override db.properties-->
    <bean class="org.springframework.context.support.PropertySourcesPlaceholderConfigurer">
        <property name="ignoreUnresolvablePlaceholders" value="true"/>
        <!-- property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE" /-->
        <property name="ignoreResourceNotFound" value="true" />
        <property name="locations" ref="locationsList">
            <!-- list><value>classpath:db.properties</value></list-->
        </property>
    </bean>
    
    <bean id="locationsDefault" 
        class="org.obrienlabs.gps.util.FileSystemLocationsFactoryBean" p:settings-ref="locationsList" />

    <util:list id="locationsList">
        <value>file://${os.environment.configuration.dir}/${os.environment.ecosystem}</value>
        <value>file://${os.environment.configuration.dir}/${os.environment.ecosystem}/${os.environment.discriminator:}</value>
        <value>file://${os.environment.configuration.dir}/${os.environment.ecosystem}/${os.environment.developer.username:}</value>
        <value>file://${os.environment.configuration.dir}/${os.environment.ecosystem}/${os.environment.discriminator:}/${os.environment.developer.username:}</value>
    </util:list>
        
    <bean id="properties" 
        class="org.obrienlabs.gps.util.FileSystemSourcePropertyPlaceholderConfigurer">
        <!-- property name="s3" ref="awsS3Client" /-->
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE" />
        <property name="fileSystemLocations" ref="locationsDefault"/>
        <property name="ignoreResourceNotFound" value="true" />
        <property name="propertiesName" value="biometric.properties" />        
    </bean> 
    
    <util:list id="propertyLocations">
        <value>classpath*:/META-INF/settings.properties</value>
    </util:list>
    <!-- look for persistence.xml in the usual META-INF place -->
    <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
         primary="true"
         p:persistenceUnitName="from">
        <property name="dataSource" ref="dataSource" />
        
        <!-- property name="persistenceXmlLocation" value="META-INF/persistence.xml" /-->
        <!-- The workaround for https://hibernate.onjira.com/browse/HHH-6181 requires a dummy mapping file -->
        <property name="mappingResources">
            <list><value>META-INF/orm.xml</value></list>
        </property>
        <!-- requires an anonymous @PersistenceContext and precludes need for a persistence.xml in Spring 3.1-->
        <property name="packagesToScan" value="org.obrienlabs.gps.business.entity"/>
        <property name="jpaVendorAdapter">
            <bean name="jpaVendorAdapter" class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
                <property name="showSql" value="false" />
                <property name="databasePlatform" value="org.hibernate.dialect.MySQL5InnoDBDialect" />
                <property name="generateDdl" value="false" />
            </bean>
        </property>
        <property name="jpaProperties">
            <props>
                <prop key="hibernate.dialect">org.hibernate.dialect.MySQL5InnoDBDialect</prop>               
                <prop key="hibernate.temp.use_jdbc_metadata_defaults">true</prop>
                <prop key="javax.persistence.validation.mode">NONE</prop>
                <!-- prop key="hibernate.connection.driver_class">com.mysql.jdbc.Driver</prop-->
                <prop key="hibernate.hbm2ddl.auto">none</prop>
                <prop key="hibernate.connection.pool_size">50</prop>
                <prop key="hibernate.max_fetch_depth">3</prop>
                <prop key="hibernate.max_fetch_size">50</prop>
                <prop key="hibernate.max_batch_size">10</prop>                
                <prop key="hibernate.bytecode.use_reflection_optimizer">false</prop>
                <!-- prop key="hibernate.jdbc.batch_size">5</prop>
                <prop key="hibernate.jdbc.batch_versioned_data">true</prop-->
                <prop key="hibernate.jdbc.use_streams_for_binary">true</prop> 
                <!-- prop key="hibernate.hbm2ddl.auto">drop-create</prop-->              
            </props>
        </property>
    </bean>   
    <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" />
    <bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor" />
    <context:annotation-config />
    <context:spring-configured />
    <!-- rest annotations -->
    <mvc:annotation-driven />
    <bean id="jpaDialect" class="org.springframework.orm.jpa.vendor.HibernateJpaDialect" />
    <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"
        primary="true"
        p:dataSource-ref="dataSource"
        p:entityManagerFactory-ref="entityManagerFactory" />
       
    <tx:annotation-driven transaction-manager="transactionManager"/>
    <!--  read in DAO's via Repository annotation -->
    <context:component-scan base-package="org.obrienlabs.gps.business.*" use-default-filters="false">
        <context:include-filter type="annotation" expression="org.springframework.stereotype.Repository" />
    </context:component-scan>
    <!--  Rest controllers -->
    <context:component-scan base-package="org.obrienlabs.gps.business" />
    <!-- in cases where the DAO's are in a separate jar - list them -->
    <bean id="daoFacade" class="org.obrienlabs.gps.business.ApplicationService"/>
    <bean id="jaxbMarshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
        <property name="contextPath" value="org.obrienlabs.gps.business.entity"/>
    </bean>
    <!-- bean class="org.obrienlabs.gps.integration.SwaggerConfig" /-->
</beans>
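The ${os.environment.*} placeholders in spring.xml above resolve from JVM system properties, environment-backed system properties, or a biometric.properties file found via the locationsList bean. A minimal sketch of such a properties file - the keys are taken from spring.xml, but all values below are illustrative placeholders:

```properties
# consumed by the dataSource bean - values are illustrative placeholders
os.environment.db.url=jdbc:mysql://127.0.0.1:3406
os.environment.db.biometric.schema=biometric
os.environment.db.username=aUsername
os.environment.db.password=aPassword
# directory layout scanned by the locationsList bean
os.environment.configuration.dir=/opt/config
os.environment.ecosystem=dev
```

With SYSTEM_PROPERTIES_MODE_OVERRIDE set on the configurer, -D system properties of the same names take precedence over the file values.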


Results

INFO  :      : 2019-09-23 21:11:43,261 [main           ] - HHH000204: Processing PersistenceUnitInfo [
	name: from
	...]
INFO  :      : 2019-09-23 21:11:43,630 [main           ] - HHH000130: Instantiating explicit connection provider: org.hibernate.ejb.connection.InjectedDataSourceConnectionProvider
Mon Sep 23 21:11:43 EDT 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
INFO  :      : 2019-09-23 21:11:44,820 [main           ] - HHH000400: Using dialect: org.hibernate.dialect.MySQL5InnoDBDialect
INFO  :      : 2019-09-23 21:11:44,847 [main           ] - HHH000268: Transaction strategy: org.hibernate.engine.transaction.internal.jdbc.JdbcTransactionFactory
INFO  :      : 2019-09-23 21:11:44,853 [main           ] - HHH000397: Using ASTQueryTranslatorFactory
INFO  :      : 2019-09-23 21:11:45,397 [main           ] - HV000001: Hibernate Validator 5.1.2.Final


In Eclipse Data Source Explorer

RDS Tunnel Connection from Java via JSCH library

TBD http://www.jcraft.com/jsch/examples/
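The JSch-based option replaces the external ssh process with an in-JVM local port forward. A minimal sketch, assuming the same bastion user, ports, and key as the OSX tunnel above - the hostnames, key path, and class name here are placeholders, not real endpoints:

```java
package systems.obrien.biometric.integration;

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

// Programmatic analog of: ssh ubuntu@bastion -L 3406:rds-endpoint:3306 -N -i key.pem
public class JschTunnel {
    public static Session openTunnel() throws Exception {
        JSch jsch = new JSch();
        jsch.addIdentity("/Users/me/.ssh/obrien.pem");                   // placeholder key path
        Session session = jsch.getSession("ubuntu", "bastion.example.com", 22); // placeholder host
        session.setConfig("StrictHostKeyChecking", "no");                // or pre-populate known_hosts
        session.connect();
        // forward local 3406 to 3306 on the RDS endpoint (resolved from inside the VPC)
        session.setPortForwardingL(3406, "prod.example.us-east-1.rds.amazonaws.com", 3306);
        return session;
    }

    public static void main(String[] args) throws Exception {
        Session session = openTunnel();
        // run JDBC against jdbc:mysql://127.0.0.1:3406/aSchema while the session is connected
        session.disconnect();
    }
}
```

Keep the Session open for the lifetime of the JDBC connections and disconnect it on shutdown.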

Connecting to MySQL

Install MySQL Workbench alone or along with a local copy of MySQL https://dev.mysql.com/downloads/file/?id=474219

Connecting to AWS MySQL via Workbench

Connect to an AWS RDS database in a private subnet through a bastion/jump host using MySQL Workbench.

Gather your DB connection URL, the bastion IP, the bastion user, and the SSH key.

The bastion's SecurityGroup requires only port 22 open for incoming traffic - not port 3306.

Incoming 3306 is optional and only required if you are connecting directly on the bastion itself.


Create a workbench connection by selecting "Standard TCP/IP over SSH" and test the connection through the bastion/jump-host.

Connect to AWS RDS database in a public subnet via MySQL Workbench

Not recommended 

Get your connection parameters from RDS.

Create a profile in Workbench without SSL tunnelling.

Run a query against a table - in this case gps/accel/tesla/heartrate data per second.




Connecting to AWS MySQL via Eclipse or Spring Source Toolsuite

Install the Eclipse database development plugins specific to the databases we need.

Download the latest MySQL driver jar from https://dev.mysql.com/downloads/connector/j/5.1.html - as of 20190915 it is "mysql-connector-java-5.1.48.jar".


Test the Eclipse connection.

Run a query from inside eclipse/SpringSourceToolSuite to the remote AWS RDS MySQL database.

Programmatic Connections to MySQL from JPA 2


Importing and exporting for MySQL in Workbench

Note: AWS will charge around $1.25 as "DataTransfer-Out-Bytes" under RunInstances for EC2 for 5 GB of data out of an RDS instance.

If you have a dump file or directory you may import both the database schema and the rows together into an empty database.

If you have only an sql query dump for a particular table - you must create the schema first - manually via DDL or using a JPA session.

Create or verify that a database and schema exist for the import.

Import the full dump.

Check the tables - 3 million biometric samples up to 2015.

Truncate one of the tables to import a newer SQL result dump - to bring in 2019 data.

Import the SQL query result.

Import the full table to 2019.

Verify the new record count at 11 million biometric samples.

Dumping an AWS RDS database using MySQL Workbench

Create either a direct connection if the RDS database is in a public subnet or a tunnel connection via the bastion host if the RDS database is in a private subnet.

The dump will take about 7 minutes on a 4 GB db.t3.medium instance - it uses a peak of 17% CPU and leaves 1100 MB of freeable RAM.

On a db.t3.small freshly rebooted as a db.t3.medium with 3 of 4 GB free - the export needs 1.7 GB, leaving 1164 MB free.

The export completes with default options on Workbench 6.3.10 but not on 8.0.18.

MySQL database dump using mysqldump

Running: /Applications/MySQLWorkbench.app/Contents/MacOS/mysqldump --defaults-file="/var/folders/mk/90_85fd56s1_qmrlx_cp_ds80000gn/T/tmpN0tF8y/extraparams.cnf"  --user=os --host=pr***y.us-east-1.rds.amazonaws.com --protocol=tcp --port=3306 --default-character-set=utf8 --single-transaction=TRUE --no-create-info=TRUE --skip-triggers "biometric"

Generating DDL

show create table biometric.session;
show create table biometric.gps_record;

CREATE TABLE `session` (
  `IDENT_ID` bigint(20) NOT NULL AUTO_INCREMENT,
  `version` bigint(20) DEFAULT NULL,
  `sessionId` bigint(20) DEFAULT NULL,
  `user_IDENT_ID` bigint(20) DEFAULT NULL,
  PRIMARY KEY (`IDENT_ID`),
  KEY `FKA11C0E76D6C0252` (`user_IDENT_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

Run in the mysql CLI:
mysql> CREATE TABLE `session` (
    ->   `IDENT_ID` bigint(20) NOT NULL AUTO_INCREMENT,
    ->   `version` bigint(20) DEFAULT NULL,
    ->   `sessionId` bigint(20) DEFAULT NULL,
    ->   `user_IDENT_ID` bigint(20) DEFAULT NULL,
    ->   PRIMARY KEY (`IDENT_ID`),
    ->   KEY `FKA11C0E76D6C0252` (`user_IDENT_ID`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    -> ;
Query OK, 0 rows affected (0.01 sec)

CREATE TABLE `gps_record` (
  `IDENT_ID` bigint(20) NOT NULL AUTO_INCREMENT,
  `ACCELX` varchar(255) DEFAULT NULL,
  `ACCELY` varchar(255) DEFAULT NULL,
  `ACCELZ` varchar(255) DEFAULT NULL,
  `ACCURACY` decimal(19,10) DEFAULT NULL,
  `ALTITUDE` decimal(19,10) DEFAULT NULL,
  `bearing` int(11) DEFAULT NULL,
  `geohash` varchar(255) DEFAULT NULL,
  `GRAVX` varchar(255) DEFAULT NULL,
  `GRAVY` varchar(255) DEFAULT NULL,
  `GRAVZ` varchar(255) DEFAULT NULL,
  `GYROX` varchar(255) DEFAULT NULL,
  `GYROY` varchar(255) DEFAULT NULL,
  `GYROZ` varchar(255) DEFAULT NULL,
  `HEART1` int(11) DEFAULT NULL,
  `HEART2` int(11) DEFAULT NULL,
  `HRDEV1` varchar(255) DEFAULT NULL,
  `HRDEV2` varchar(255) DEFAULT NULL,
  `humidity` varchar(255) DEFAULT NULL,
  `LATITUDE` decimal(19,10) DEFAULT NULL,
  `light` varchar(255) DEFAULT NULL,
  `LINACCX` varchar(255) DEFAULT NULL,
  `LINACCY` varchar(255) DEFAULT NULL,
  `LINACCZ` varchar(255) DEFAULT NULL,
  `LONGITUDE` decimal(19,10) DEFAULT NULL,
  `PRES` double DEFAULT NULL,
  `provider` varchar(255) DEFAULT NULL,
  `PROX` varchar(255) DEFAULT NULL,
  `RECV_SEQ` bigint(20) DEFAULT NULL,
  `ROTVECX` varchar(255) DEFAULT NULL,
  `ROTVECY` varchar(255) DEFAULT NULL,
  `ROTVECZ` varchar(255) DEFAULT NULL,
  `SEND_SEQ` bigint(20) DEFAULT NULL,
  `speed` double DEFAULT NULL,
  `temp` double DEFAULT NULL,
  `teslaX` varchar(255) DEFAULT NULL,
  `teslaY` varchar(255) DEFAULT NULL,
  `teslaZ` varchar(255) DEFAULT NULL,
  `tsStart` bigint(20) DEFAULT NULL,
  `tsStop` bigint(20) DEFAULT NULL,
  `userId` bigint(20) DEFAULT NULL,
  `version` bigint(20) DEFAULT NULL,
  PRIMARY KEY (`IDENT_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=11935042 DEFAULT CHARSET=latin1


Running Oracle DB

ojdbc7 jar missing from Maven Central

Get the jar from https://www.oracle.com/database/technologies/jdbc-upc-downloads.html and insert it into the .m2 folder or use https://mvnrepository.com/artifact/com.oracle/ojdbc7/12.1.0.2

        <dependency>
            <groupId>com.oracle.jdbc</groupId>                                                                                                   
            <artifactId>ojdbc7</artifactId>    
            <version>12.1.0.2</version>
        </dependency> 

or switch to ojdbc8

Adding the Oracle ojdbc7.jar manually

Add the ojdbc7.jar directly to your .m2 if your Nexus is ever unavailable.

mvn install:install-file -Dfile=reference-nbi/src/main/lib/ojdbc7-12.1.0.2.jar -DgroupId=com.oracle.jdbc -DartifactId=ojdbc7 -Dversion=12.1.0.2 -Dpackaging=jar


Raising Oracle 12 in Docker

There is no out-of-the-box open source Docker Hub image or Helm chart yet - but I was able to "purchase" a license - hopefully with no hidden costs to my Docker Hub account.
Accept the license to get to the pull screen for the 3 GB image.

sqlplus sys/Oradoc_db1@ORCLCDB as sysdba

https://hub.docker.com/u/obrienlabs/content/sub-3

#docker pull store/oracle/database-enterprise:12.2.0.1
michaelobrien@biometrics MINGW64 /f/wse_s460i/20200416
$ docker run -d -it -p 32769:1521 --name oracle store/oracle/database-enterprise:12.2.0.1
Unable to find image 'store/oracle/database-enterprise:12.2.0.1' locally
12.2.0.1: Pulling from store/oracle/database-enterprise
b465d9b6e399: Downloading [=============================> ] 1.593GB/2.739GB 

$ docker ps | grep oracle
364b3d6b10aa store/oracle/database-enterprise:12.2.0.1 "/bin/sh -c '/bin/ba…" 16 seconds ago Up 15 seconds (health: starting) 0.0.0.0:32769->1521/tcp, 0.0.0.0:32768->5500/tcp oracle

Connect via sql developer using sys/Oradoc_db1@ORCLCDB as sysdba on localhost:32769

Stopping:
docker stop oracle

Restarting:
docker start oracle
6860ed06cb6f        store/oracle/database-enterprise:12.2.0.1   "/bin/sh -c '/bin/ba…"   8 days ago          Up 16 seconds (health: starting)   5500/tcp, 0.0.0.0:31521->1521/tcp   oracle

Linux

Note the issue 

ubuntu@ip-172-31-81-46:~$ sudo docker run -d -it -p 31521:1521 --name oracle store/oracle/database-enterprise:12.2.0.1
Unable to find image 'store/oracle/database-enterprise:12.2.0.1' locally
12.2.0.1: Pulling from store/oracle/database-enterprise
4ce27fe12c04: Pull complete 
9d3556e8e792: Extracting [==================================================>]    151MB/151MB



Testing Oracle JDBC connection without JPA

package cloud.containerization.reference.nbi;

import java.sql.Connection;
import java.sql.DriverManager;

// http://wiki.obrienlabs.cloud/display/DEV/Databases#Databases-LocalSSHTunnelinOSX
public class JDBCTest {
    public String testConnection(String username, String password) {
        Connection connection;
        String ret = null;
        String url = "jdbc:oracle:thin:@192.168.0.103:31521:ORCLCDB";
        try {
            Class.forName("oracle.jdbc.OracleDriver");
            System.out.println("testing: " + username);
            connection = DriverManager.getConnection(url, username, password);
            ret = connection.getMetaData().getUserName();
            System.out.println(ret);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return ret;
    }

    public static void main(String[] args) {
        JDBCTest test = new JDBCTest();
        test.testConnection("SYS as SYSDBA", "Oradoc_db1");
    }
}


Raising Oracle 12 in Azure

Raising Oracle 12 in AWS RDS


Running MySQL DB

Options: Public subnet, private subnet

Raising MySQL locally


/usr/local/mysql/bin/mysql mysql
sudo /usr/local/mysql/bin/mysqld_safe --skip-grant-tables
sudo vi /etc/hosts


Raising MySQL in RDS

Raising MySQL in Aurora

Raising MySQL in Docker

After I wrote this section, I ran into my own docker-compose commit https://github.com/obrienlabs/docker-devops/blob/master/src/docker-compose.yml from before I started in earnest with Kubernetes/Helm.

https://hub.docker.com/_/mysql - there are some syntax errors on the Docker Hub page - see the changes to the 2nd (client) container below. Docker is like machine language - you usually don't work with it directly; use an orchestrator/scheduler like Kubernetes - see the Kubernetes Developer Guide.

# bring up a mysql 5.7 server
obrienlabs:kubernetes michaelobrien$ docker run --name mysql-dev -e MYSQL_ROOT_PASSWORD=root -d -p 3506:3306 mysql:5.7
a5206bf8894e04c8e9f827a5c1421fafa6c4ae14eb3f8d26a36d45d9c41c0cd7
obrienlabs:kubernetes michaelobrien$ docker ps -a
CONTAINER ID        IMAGE                                    COMMAND                  CREATED             STATUS                     PORTS                               NAMES
a5206bf8894e        mysql:5.7                                "docker-entrypoint.s…"   29 minutes ago      Up 29 minutes              33060/tcp, 0.0.0.0:3506->3306/tcp   mysql-dev

# check the logs
obrienlabs:kubernetes michaelobrien$ docker logs mysql-dev
Initializing database
2019-09-28T00:22:59.043108Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2019-09-28T00:22:59.202687Z 0 [Warning] InnoDB: New log files created, LSN=45790

# check inside the container
obrienlabs:kubernetes michaelobrien$ docker exec -it mysql-dev bash
root@a5206bf8894e:/# ls
bin  boot  dev	docker-entrypoint-initdb.d  entrypoint.sh  etc	home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@a5206bf8894e:/# exit

# run a 2nd mysql server as a client to the first
obrienlabs:kubernetes michaelobrien$ docker run --name mysql-dev2 -it --network="host" --rm mysql:5.7.27 sh -c "mysql -h 127.0.0.1 -P 3506 -u root -p pw"
Unable to find image 'mysql:5.7.27' locally
5.7.27: Pulling from library/mysql
Digest: sha256:f7985e36c668bb862a0e506f4ef9acdd1254cdf690469816f99633898895f7fa
Status: Downloaded newer image for mysql:5.7.27
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.7.27 MySQL Community Server (GPL)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select * from sys.sys_config;
+--------------------------------------+-------+---------------------+--------+
| variable                             | value | set_time            | set_by |
+--------------------------------------+-------+---------------------+--------+
| diagnostics.allow_i_s_tables         | OFF   | 2019-09-28 00:22:59 | NULL   |
| diagnostics.include_raw              | OFF   | 2019-09-28 00:22:59 | NULL   |
| ps_thread_trx_info.max_length        | 65535 | 2019-09-28 00:22:59 | NULL   |
| statement_performance_analyzer.limit | 100   | 2019-09-28 00:22:59 | NULL   |
| statement_performance_analyzer.view  | NULL  | 2019-09-28 00:22:59 | NULL   |
| statement_truncate_len               | 64    | 2019-09-28 00:22:59 | NULL   |
+--------------------------------------+-------+---------------------+--------+
6 rows in set (0.00 sec)

mysql> exit
Bye

# during migration of 13 million records - a total of 55 GB of I/O in 6 minutes on an 8-core MacBook Pro 2019
docker stats
CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
a5206bf8894e        mysql-dev           49.36%              320.6MiB / 2.933GiB   10.67%              135MB / 6.77MB      0B / 9.45GB         33

CONTAINER ID   NAME        CPU %     MEM USAGE / LIMIT     MEM %     NET I/O         BLOCK I/O         PIDS
ff5967e410f4   mysql-dev  0.03%     486.2MiB / 8.749GiB   5.43%     5.06GB / 13MB   20.5kB / 18.7GB   32

obrienlabs:kubernetes michaelobrien$ docker run -it --network="host" --rm mysql:5.7.27 sh -c "mysql -h 127.0.0.1 -P 3506 -u root -p"
mysql> select count(1) from biometric.gps_record;
+----------+
| count(1) |
+----------+
| 13174445 |
+----------+
1 row in set (4.02 sec)

Import mysql dump into clean docker mysql database

mysql> select count(1) from biometric.gps_record;
ERROR 1146 (42S02): Table 'biometric.gps_record' doesn't exist
mysql> create database biometric;
Query OK, 1 row affected (0.01 sec)

mysql> use biometric;
Database changed
mysql> CREATE TABLE `device` (
    ->   `IDENT_ID` bigint(20) NOT NULL AUTO_INCREMENT,
    ->   `version` bigint(20) DEFAULT NULL,
    ->   `deviceId` bigint(20) DEFAULT NULL,
    ->   `firstAccess` bigint(20) DEFAULT NULL,
    ->   `ip` varchar(255) DEFAULT NULL,
    ->   `lastAccess` bigint(20) DEFAULT NULL,
    ->   `name` varchar(255) DEFAULT NULL,
    ->   `user_IDENT_ID` bigint(20) DEFAULT NULL,
    ->   PRIMARY KEY (`IDENT_ID`),
    ->   KEY `FK77FE1256D6C0252` (`user_IDENT_ID`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE `gps_user` (
    ->   `IDENT_ID` bigint(20) NOT NULL,
    ->   `FIRSTACCESS` bigint(20) DEFAULT NULL,
    ->   `LASTACCESS` bigint(20) DEFAULT NULL,
    ->   `NAME` varchar(255) DEFAULT NULL,
    ->   `USERID` bigint(20) DEFAULT NULL,
    ->   `VERSION` bigint(20) DEFAULT NULL,
    ->   PRIMARY KEY (`IDENT_ID`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE `reading` (
    ->   `IDENT_ID` bigint(20) NOT NULL AUTO_INCREMENT,
    ->   `version` bigint(20) DEFAULT NULL,
    ->   `seqNumRec` bigint(20) DEFAULT NULL,
    ->   `seqNumSent` bigint(20) DEFAULT NULL,
    ->   `tsRec` bigint(20) DEFAULT NULL,
    ->   `tsSent` bigint(20) DEFAULT NULL,
    ->   `value` varchar(255) DEFAULT NULL,
    ->   `sensor_IDENT_ID` bigint(20) DEFAULT NULL,
    ->   `session_IDENT_ID` bigint(20) DEFAULT NULL,
    ->   PRIMARY KEY (`IDENT_ID`),
    ->   KEY `FK6B315A6C970CB080` (`session_IDENT_ID`),
    ->   KEY `FK6B315A6C300EF0F2` (`sensor_IDENT_ID`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE `sensor` (
    ->   `DTYPE` varchar(1) NOT NULL,
    ->   `IDENT_ID` bigint(20) NOT NULL AUTO_INCREMENT,
    ->   `version` bigint(20) DEFAULT NULL,
    ->   `builtIn` tinyint(1) DEFAULT NULL,
    ->   `dynamic` tinyint(1) DEFAULT NULL,
    ->   `label` varchar(255) DEFAULT NULL,
    ->   `manual` tinyint(1) DEFAULT NULL,
    ->   `sensorId` bigint(20) DEFAULT NULL,
    ->   `serial` varchar(255) DEFAULT NULL,
    ->   `type` varchar(255) DEFAULT NULL,
    ->   `unit` varchar(255) DEFAULT NULL,
    ->   PRIMARY KEY (`IDENT_ID`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE `sequence` (
    ->   `SEQ_NAME` varchar(50) NOT NULL,
    ->   `SEQ_COUNT` decimal(38,0) DEFAULT NULL,
    ->   PRIMARY KEY (`SEQ_NAME`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.02 sec)

mysql> CREATE TABLE `session` (
    ->   `IDENT_ID` bigint(20) NOT NULL AUTO_INCREMENT,
    ->   `version` bigint(20) DEFAULT NULL,
    ->   `sessionId` bigint(20) DEFAULT NULL,
    ->   `user_IDENT_ID` bigint(20) DEFAULT NULL,
    ->   PRIMARY KEY (`IDENT_ID`),
    ->   KEY `FKA11C0E76D6C0252` (`user_IDENT_ID`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    -> ;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE `gps_record` (
    ->   `IDENT_ID` bigint(20) NOT NULL AUTO_INCREMENT,
    ->   `ACCELX` varchar(255) DEFAULT NULL,
    ->   `ACCELY` varchar(255) DEFAULT NULL,
    ->   `ACCELZ` varchar(255) DEFAULT NULL,
    ->   `ACCURACY` decimal(19,10) DEFAULT NULL,
    ->   `ALTITUDE` decimal(19,10) DEFAULT NULL,
    ->   `bearing` int(11) DEFAULT NULL,
    ->   `geohash` varchar(255) DEFAULT NULL,
    ->   `GRAVX` varchar(255) DEFAULT NULL,
    ->   `GRAVY` varchar(255) DEFAULT NULL,
    ->   `GRAVZ` varchar(255) DEFAULT NULL,
    ->   `GYROX` varchar(255) DEFAULT NULL,
    ->   `GYROY` varchar(255) DEFAULT NULL,
    ->   `GYROZ` varchar(255) DEFAULT NULL,
    ->   `HEART1` int(11) DEFAULT NULL,
    ->   `HEART2` int(11) DEFAULT NULL,
    ->   `HRDEV1` varchar(255) DEFAULT NULL,
    ->   `HRDEV2` varchar(255) DEFAULT NULL,
    ->   `humidity` varchar(255) DEFAULT NULL,
    ->   `LATITUDE` decimal(19,10) DEFAULT NULL,
    ->   `light` varchar(255) DEFAULT NULL,
    ->   `LINACCX` varchar(255) DEFAULT NULL,
    ->   `LINACCY` varchar(255) DEFAULT NULL,
    ->   `LINACCZ` varchar(255) DEFAULT NULL,
    ->   `LONGITUDE` decimal(19,10) DEFAULT NULL,
    ->   `PRES` double DEFAULT NULL,
    ->   `provider` varchar(255) DEFAULT NULL,
    ->   `PROX` varchar(255) DEFAULT NULL,
    ->   `RECV_SEQ` bigint(20) DEFAULT NULL,
    ->   `ROTVECX` varchar(255) DEFAULT NULL,
    ->   `ROTVECY` varchar(255) DEFAULT NULL,
    ->   `ROTVECZ` varchar(255) DEFAULT NULL,
    ->   `SEND_SEQ` bigint(20) DEFAULT NULL,
    ->   `speed` double DEFAULT NULL,
    ->   `temp` double DEFAULT NULL,
    ->   `teslaX` varchar(255) DEFAULT NULL,
    ->   `teslaY` varchar(255) DEFAULT NULL,
    ->   `teslaZ` varchar(255) DEFAULT NULL,
    ->   `tsStart` bigint(20) DEFAULT NULL,
    ->   `tsStop` bigint(20) DEFAULT NULL,
    ->   `userId` bigint(20) DEFAULT NULL,
    ->   `version` bigint(20) DEFAULT NULL,
    ->   PRIMARY KEY (`IDENT_ID`)
    -> ) ENGINE=InnoDB AUTO_INCREMENT=11935042 DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.01 sec)

Run the dump.

Query:

mysql> select * from biometric.gps_record where userId='201904080' order by heart2 DESC LIMIT 1;
+----------+-----------+-----------+-----------+---------------+---------------+---------+--------------+----------+----------+----------+-------+-------+-------+--------+--------+--------+--------+----------+---------------+-------+-----------+-----------+-----------+----------------+------+-----------+------+----------+----------+-----------+----------+----------+-------+------+------------+------------+----------+---------------+---------------+-----------+---------+
| IDENT_ID | ACCELX    | ACCELY    | ACCELZ    | ACCURACY      | ALTITUDE      | bearing | geohash      | GRAVX    | GRAVY    | GRAVZ    | GYROX | GYROY | GYROZ | HEART1 | HEART2 | HRDEV1 | HRDEV2 | humidity | LATITUDE      | light | LINACCX   | LINACCY   | LINACCZ   | LONGITUDE      | PRES | provider  | PROX | RECV_SEQ | ROTVECX  | ROTVECY   | ROTVECZ  | SEND_SEQ | speed | temp | teslaX     | teslaY     | teslaZ   | tsStart       | tsStop        | userId    | version |
+----------+-----------+-----------+-----------+---------------+---------------+---------+--------------+----------+----------+----------+-------+-------+-------+--------+--------+--------+--------+----------+---------------+-------+-----------+-----------+-----------+----------------+------+-----------+------+----------+----------+-----------+----------+----------+-------+------+------------+------------+----------+---------------+---------------+-----------+---------+
| 11271220 | -0.763702 | -1.003540 | -0.230164 | 65.0000000000 | 76.0890050000 |      99 | f244mkh5h13t | 0.000000 | 0.000000 | 0.000000 | NULL  | NULL  | NULL  |   NULL |    109 | NULL   | NULL   | NULL     | 45.4180030000 | NULL  | -0.763702 | -1.003540 | -0.230164 | -75.7011200000 | NULL | 12.100000 | NULL |  3662206 | 0.532636 | -0.462588 | 0.541578 |     2234 |    -1 | NULL | -69.590858 | -20.243172 | 0.503548 | 1554738044948 | 1554738045010 | 201904080 |       0 |
+----------+-----------+-----------+-----------+---------------+---------------+---------+--------------+----------+----------+----------+-------+-------+-------+--------+--------+--------+--------+----------+---------------+-------+-----------+-----------+-----------+----------------+------+-----------+------+----------+----------+-----------+----------+----------+-------+------+------------+------------+----------+---------------+---------------+-----------+---------+
1 row in set (12.66 sec)

mysql> select count(1) from biometric.gps_record;
+----------+
| count(1) |
+----------+
| 11933650 |
+----------+
1 row in set (11.87 sec)

Increase the instance core count from 8 to 14 and rerun - the full-table count drops from 11.87 s to 5.32 s:
mysql> select count(1) from biometric.gps_record;
+----------+
| count(1) |
+----------+
| 11933650 |
+----------+
1 row in set (5.32 sec)
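Both the ORDER BY heart2 lookup and the raw count are full scans of the ~11.9M-row table. A covering index should cut the 12-second query; a hedged sketch (the index name is an assumption - try it on a copy of the schema first, since the ALTER rewrites a large table):

```shell
# Hypothetical index DDL for the slow userId/HEART2 query - feed it to the
# mysql client over the tunnel once verified on a test schema.
DDL="ALTER TABLE biometric.gps_record ADD INDEX idx_user_heart2 (userId, HEART2);"
echo "$DDL"
```

With the index in place the ORDER BY ... LIMIT 1 becomes an index-ordered read instead of a filesort over the whole table.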


Raising MySQL in Helm/Kubernetes

Run the default MySQL chart https://github.com/helm/charts/tree/master/stable/mysql on Kubernetes 1.14.6 (RKE 0.28, Docker 19.03.2) - bootstrap your environment via Kubernetes Developer Guide#Quickstart

running on services.obrienlabs.cloud

Infrastructure Experimentation#MySQLHelmChart


ubuntu@ip-172-31-81-46:~$ sudo helm install --name mysqldb-biometric --set mysqlRootPassword=....,mysqlUser=o...,mysqlPassword=...,mysqlDatabase=biometric stable/mysql
NAME:   mysqldb-biometric
LAST DEPLOYED: Mon Sep 30 19:57:38 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Secret
NAME               TYPE    DATA  AGE
mysqldb-biometric  Opaque  2     0s
==> v1/ConfigMap
NAME                    DATA  AGE
mysqldb-biometric-test  1     0s
==> v1/PersistentVolumeClaim
NAME               STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mysqldb-biometric  Pending  0s
==> v1/Service
NAME               TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
mysqldb-biometric  ClusterIP  10.43.79.252  <none>       3306/TCP  0s
==> v1beta1/Deployment
NAME               DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
mysqldb-biometric  1        1        1           0          0s
==> v1/Pod(related)
NAME                                READY  STATUS   RESTARTS  AGE
mysqldb-biometric-844cb75cb4-8dzw2  0/1    Pending  0         0s
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysqldb-biometric.default.svc.cluster.local
To get your root password run:
    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysqldb-biometric -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)
To connect to your database:
1. Run an Ubuntu pod that you can use as a client:
    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
2. Install the mysql client:
    $ apt-get update && apt-get install mysql-client -y
3. Connect using the mysql cli, then provide your password:
    $ mysql -h mysqldb-biometric -p
To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306
    # Execute the following command to route the connection:
    kubectl port-forward svc/mysqldb-biometric 3306
    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
    
ubuntu@ip-172-31-81-46:~$ kubectl get services --all-namespaces -o wide | grep mysql
default         mysqldb-biometric      ClusterIP   10.43.79.252    <none>        3306/TCP        7m32s   app=mysqldb-biometric
ubuntu@ip-172-31-81-46:~$ kubectl get pods --all-namespaces -o wide | grep mysql
default         mysqldb-biometric-844cb75cb4-8dzw2        0/1     Pending     0          7m40s   <none>         <none>                 <none>           <none>

We need a PV provisioner - the pod is stuck Pending because its PersistentVolumeClaim is unbound:
ubuntu@ip-172-31-81-46:~$ kubectl describe pods mysqldb-biometric-844cb75cb4-8dzw2
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  45s (x17 over 10m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims

ubuntu@ip-172-31-81-46:~$ helm delete mysqldb-biometric
release "mysqldb-biometric" deleted
ubuntu@ip-172-31-81-46:~$ helm del --purge mysqldb-biometric
release "mysqldb-biometric" deleted

# Disable NFS persistence for now and use the emptyDir volume - note that deleting the pod then loses the database under /var/lib/mysql
Check out the charts from git:
cd charts/stable/mysql
vi values.yaml
  persistence:
    enabled: false

cd ..
sudo helm package mysql
sudo helm install --name mysqldb-biometric --set mysqlRootPassword=..,mysqlUser=...,mysqlPassword=...,mysqlDatabase=biometric mysql
ubuntu@ip-172-31-81-46:~/charts/stable$ kubectl get pods --all-namespaces -o wide | grep mysql
default         mysqldb-biometric-84b9d8ff7d-9zcjn        0/1     Running     0          18s    10.42.0.18     rke.obrienlabs.cloud   <none>           <none>
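Editing values.yaml and repackaging works, but the same override can likely be passed inline - a sketch (assumes the stable/mysql chart's persistence.enabled value; credentials elided as above):

```shell
# Hypothetical one-liner: disable persistence via --set instead of editing values.yaml.
CMD="sudo helm install --name mysqldb-biometric --set persistence.enabled=false,mysqlDatabase=biometric stable/mysql"
echo "$CMD"   # run on the RKE host
```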

ubuntu@ip-172-31-81-46:~/charts/stable$ helm ls
NAME             	REVISION	UPDATED                 	STATUS  	CHART               	APP VERSION	NAMESPACE
difference-nbi   	1       	Mon Jun 10 19:00:59 2019	DEPLOYED	difference-nbi-0.1.0	1.0        	default  
mysqldb-biometric	1       	Mon Sep 30 20:26:21 2019	DEPLOYED	mysql-1.3.2         	5.7.14     	default 

# check startup
ubuntu@ip-172-31-81-46:~/charts/stable$ kubectl logs -f mysqldb-biometric-84b9d8ff7d-t5rrq
Initializing database
Database initialized
MySQL init process in progress...
Warning: Unable to load '/usr/share/zoneinfo/Factory' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/posix/Factory' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/right/Factory' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
mysql: [Warning] Using a password on the command line interface can be insecure.
mysql: [Warning] Using a password on the command line interface can be insecure.
MySQL init process done. Ready for start up.

We can bring up a second MySQL pod or Docker container to act as a client to the MySQL service above. The catch is that the service port must be reachable from outside the Kubernetes DNS namespace; we do that by adding a NodePort - either edit the service in the chart, or patch it on the fly as below.

ubuntu@ip-172-31-81-46:~/charts/stable$ docker run -it --network="host" --rm mysql:5.7.27 sh -c "mysql -h 127.0.0.1 -P 3306 -u o... -p"
Unable to find image 'mysql:5.7.27' locally
5.7.27: Pulling from library/mysql
Digest: sha256:f7985e36c668bb862a0e506f4ef9acdd1254cdf690469816f99633898895f7fa
Status: Downloaded newer image for mysql:5.7.27
Enter password: 
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)



# get the service port
ubuntu@ip-172-31-81-46:~/charts/stable$ kubectl get services --all-namespaces -o wide | grep mysql
default         mysqldb-biometric      ClusterIP   10.43.145.247   <none>        3306/TCP        9m24s   app=mysqldb-biometric

# append a nodeport to the existing container service
ubuntu@ip-172-31-81-46:~/charts/stable$ kubectl patch svc mysqldb-biometric -p '{"spec":{"type":"NodePort"}}'
service/mysqldb-biometric patched
ubuntu@ip-172-31-81-46:~/charts/stable$ kubectl get services --all-namespaces -o wide | grep mysql
default         mysqldb-biometric      NodePort    10.43.145.247   <none>        3306:32126/TCP   9m55s   app=mysqldb-biometric
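The patch above lets Kubernetes assign a random NodePort (32126 here). Since Service ports merge on the port key, the same strategic-merge patch can probably pin a fixed one - a sketch (30306 is an arbitrary pick from the default 30000-32767 range):

```shell
# Hypothetical: pin the NodePort so clients don't have to rediscover it after a redeploy.
NODE_PORT=30306
PATCH='{"spec":{"type":"NodePort","ports":[{"port":3306,"nodePort":'$NODE_PORT'}]}}'
echo "kubectl patch svc mysqldb-biometric -p '$PATCH'"   # run against the cluster
```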

# rerun the client to connect to the mysql kubernetes container
ubuntu@ip-172-31-81-46:~/charts/stable$ docker run -it --network="host" --rm mysql:5.7.27 sh -c "mysql -h 127.0.0.1 -P 32126 -u ... -p"
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 157
Server version: 5.7.14 MySQL Community Server (GPL)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
mysql> 

# check containers
ubuntu@ip-172-31-81-46:~$ docker ps -a | grep mysql
34dfa585a7f9        mysql:5.7                                         "docker-entrypoint.s…"   14 minutes ago      Up 14 minutes               33060/tcp, 0.0.0.0:3506->3306/tcp   mysql-dev
7dc314b8c7d8        4b3b6b994512                                      "docker-entrypoint.s…"   18 minutes ago      Up 18 minutes                                                   k8s_mysqldb-biometric_mysqldb-biometric-84b9d8ff7d-t5rrq_default_695965b3-e3c1-11e9-82f2-1664d802003c_0
4ca7ca170684        758ec7f3a1ee                                      "rm -fr /var/lib/mys…"   18 minutes ago      Exited (0) 18 minutes ago                                       k8s_remove-lost-found_mysqldb-biometric-84b9d8ff7d-t5rrq_default_695965b3-e3c1-11e9-82f2-1664d802003c_0
026187149e1c        rancher/pause:3.1                                 "/pause"                 18 minutes ago      Up 18 minutes                                                   k8s_POD_mysqldb-biometric-84b9d8ff7d-t5rrq_default_695965b3-e3c1-11e9-82f2-1664d802003c_0
ubuntu@ip-172-31-81-46:~$ kubectl get pods --all-namespaces | grep mysql
default         mysqldb-biometric-84b9d8ff7d-t5rrq        1/1     Running     0          19m
ubuntu@ip-172-31-81-46:~$ kubectl get services --all-namespaces | grep mysql
default         mysqldb-biometric      NodePort    10.43.145.247   <none>        3306:32126/TCP   19m

PostgreSQL

https://portworx.com/how-to-backup-and-restore-postgresql-on-kubernetes/

PgAdmin for PostgreSQL

https://www.enterprisedb.com/edb-docs/d/pgadmin-4/reference/online-documentation/4.3/server_deployment.html

https://www.enterprisedb.com/downloads/postgres-postgresql-downloads

PostgreSQL on AWS RDS

Create DB Subnet Group

Create RDS DB


Connect to PostgreSQL via pgAdmin

pgAdmin 4 now runs by default as a local web app on a session-specific port, e.g. http://127.0.0.1:60776/browser/#

Kubernetes Operators for PostgreSQL

https://blog.flant.com/comparing-kubernetes-operators-for-postgresql/

OpenShift#Operators

Raising PostgreSQL in Helm/Kubernetes

Use Helm Charts

git clone https://github.com/helm/charts.git
ubuntu@ip-172-31-81-46:~$ cd charts/stable/postgresql/
ubuntu@ip-172-31-81-46:~/charts/stable/postgresql$ sudo helm repo update
ubuntu@ip-172-31-81-46:~/charts/stable/postgresql$ sudo helm install stable/nfs-server-provisioner --name nfs-serv-prov
A storage class named 'nfs' has now been created
and is available to provision dynamic volumes.

You can use this storageclass by creating a `PersistentVolumeClaim` with the
correct storageClassName attribute. For example:

    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: test-dynamic-volume-claim
    spec:
      storageClassName: "nfs"
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

ubuntu@ip-172-31-81-46:~/charts/stable/postgresql$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
default         nfs-serv-prov-nfs-server-provisioner-0    1/1     Running     0          76s
ubuntu@ip-172-31-81-46:~/obrienlabs$ kubectl get services --all-namespaces
NAMESPACE       NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                  AGE
default         nfs-serv-prov-nfs-server-provisioner   ClusterIP   10.43.53.214    <none>        2049/TCP,20048/TCP,51413/TCP,51413/UDP   46m

ubuntu@ip-172-31-81-46:~/charts/stable/postgresql$ sudo helm list
NAME         	REVISION	UPDATED                 	STATUS  	CHART                       	APP VERSION  	NAMESPACE
nfs-serv-prov	1       	Mon Jun 10 18:16:03 2019	DEPLOYED	nfs-server-provisioner-0.3.0	2.2.1-k8s1.12	default  

Deploy a postgresql chart that uses the provisioner

ubuntu@ip-172-31-81-46:~/charts/stable$ sudo helm package postgresql
Successfully packaged chart and saved it to: /home/ubuntu/charts/stable/postgresql-5.3.5.tgz

PostgreSQL Relational Database

The PostgreSQL flavour of a relational DB is required for certain services such as Confluence or Gerrit. We can run either a managed RDS instance or a pod in a Kubernetes cluster.

PostgreSQL on Kubernetes

Default Helm Chart

https://github.com/helm/charts/tree/master/stable/postgresql



In progress
ubuntu@ip-172-31-27-4:~$ sudo helm install --name pgstg stable/postgresql
NAME:   pgstg
LAST DEPLOYED: Sat Apr 27 21:51:58 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Secret
NAME              TYPE    DATA  AGE
pgstg-postgresql  Opaque  1     0s
==> v1/Service
NAME                       TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
pgstg-postgresql-headless  ClusterIP  None           <none>       5432/TCP  0s
pgstg-postgresql           ClusterIP  10.43.163.107  <none>       5432/TCP  0s
==> v1beta2/StatefulSet
NAME              DESIRED  CURRENT  AGE
pgstg-postgresql  1        1        0s
==> v1/Pod(related)
NAME                READY  STATUS   RESTARTS  AGE
pgstg-postgresql-0  0/1    Pending  0         0s

NOTES:
** Please be patient while the chart is being deployed **
PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:
    pgstg-postgresql.default.svc.cluster.local - Read/Write connection
To get the password for "postgres" run:
    export POSTGRES_PASSWORD=$(kubectl get secret --namespace default pgstg-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
To connect to your database run the following command:
    kubectl run pgstg-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:10.7.0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host pgstg-postgresql -U postgres

To connect to your database from outside the cluster execute the following commands:
    kubectl port-forward --namespace default svc/pgstg-postgresql 5432:5432 &
    PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres

kubectl describe pod shows the same unbound-PVC warning:
  Warning  FailedScheduling  21s (x6 over 3m42s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims
Workaround: modify the underlying YAML files to use a persistent volume with ReadWriteMany access.

K8s only
git clone https://github.com/helm/charts.git
sudo helm install postgresql --name pg

ubuntu@ip-172-31-27-4:~/charts/stable$ helm delete pg
release "pg" deleted
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pv --all-namespaces
No resources found.
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pvc --all-namespaces
NAMESPACE   NAME                              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     data-pg-postgresql-0              Pending                                                     4m48s
default     data-pgstg-postgresql-0           Pending                                                     14h
default     data-wishful-skunk-postgresql-0   Pending                                                     13m

ubuntu@ip-172-31-27-4:~/charts/stable$ vi pg-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-serv-prov-nfs-server-provisioner-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /srv/volumes/nfs-serv-prov-nfs-server-provisioner-0
  claimRef:
    namespace: kube-system
    name: nfs-serv-prov-nfs-server-provisioner-0

ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl apply -f pg-pv.yaml 
persistentvolume/nfs-serv-prov-nfs-server-provisioner-0 created
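A PV only binds when its claimRef matches the PVC's own namespace and name, and the pending claim here is default/data-pg-postgresql-0 rather than the kube-system name used above. A hedged variant targeting the chart's claim directly (capacity and hostPath are assumptions):

```shell
# Hypothetical PV aimed at the chart's pending PVC (default/data-pg-postgresql-0).
cat > pg-data-pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pg-data-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /srv/volumes/pg-data-pv
  claimRef:
    namespace: default
    name: data-pg-postgresql-0
EOF
echo "kubectl apply -f pg-data-pv.yaml"
```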

ubuntu@ip-172-31-27-4:~/charts/stable$ helm delete --purge pg
release "pg" deleted

sudo helm install postgresql --name pg

ubuntu@ip-172-31-27-4:~/charts/stable$ helm delete --purge pg
release "pg" deleted
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pvc --all-namespaces
NAMESPACE   NAME                   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     data-pg-postgresql-0   Pending                                                     7m23s
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl delete pvc data-pg-postgresql-0
persistentvolumeclaim "data-pg-postgresql-0" deleted

Change the storage class in values.yaml from "-" (no class) to nfs-provisioner:
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pvc --all-namespaces
NAMESPACE   NAME                   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS      AGE
default     data-pg-postgresql-0   Pending                                      nfs-provisioner   7s

ubuntu@ip-172-31-0-239:~$ history

   48  git clone https://github.com/helm/charts.git
   50  cd charts/
   52  cd stable/
   54  cd postgresql/
   56  cd templates/
   58  vi svc-read.yaml
   59  vi initialization-configmap.yaml
   60  cd ..
   61  vi values.yaml
   62  cd ..
   63  sudo make postgresql
   64  sudo helm install postgresql
   65  kubectl get pods --all-namespaces
   66  kubectl describe pod wishful-skunk-postgresql-0
   67  kubectl describe pod wishful-skunk-postgresql-0 | grep Read
   68  helm install nfs-server-provisioner --name nfs-serv-prov
   69  helm install nfs-client-provisioner/ --name nfs-client-prov
   70  kubectl get pods --all-namespaces
   71  helm delte wishful-skunk-postgresql-0
   72  helm delete wishful-skunk-postgresql-0
   73  helm delete wishful-skunk-postgresql
   74  helm delete postgresql
   75  helm list
   76  helm delete wishful-skunk
   77  sudo helm install postgresql --name pg
   78  kubectl get pods --all-namespaces
   79  vi postgresql/values.yaml
   80  kubectl describe pod pg
   81  vi postgresql/values.yaml
   82  helm delete pg
   83  kubectl get pv --all-namespaces
   84  kubectl get pvc --all-namespaces
   85  kubectl delete pvc data-pg-postgresql-0
   86  kubectl delete pvc data-pgstg-postgresql-0
   87  kubectl delete pvc data-wishful-skunk-postgresql-0
   88  sudo helm install postgresql --name pg
   89  helm list
   90  helm del --purge pg
   91  sudo helm install postgresql --name pg
   92  kubectl get pods --all-namespaces
   93  kubectl describe pod pg
   94  vi pg-pv.yaml
   95  kubectl apply -f pg-pv.yaml
   96  vi pg-pv.yaml
   97  kubectl apply -f pg-pv.yaml
   98  vi pg-pv.yaml
   99  helm list
  100  helm delete --purge nfs-client-prov
  101  helm delete --purge pg
  102  sudo helm install postgresql --name pg
  103  kubectl get pods --all-namespaces
  104  helm delete --purge pg
  105  kubectl get pvc --all-namespaces
  106  kubectl delete pvc data-pg-postgresql-0
  107  vi postgresql/values.yaml
  108  sudo helm install postgresql --name pg
  109  kubectl get pvc --all-namespaces
  110  helm delete --purge pg
  111  kubectl delete pvc data-pg-postgresql-0
  112  ls
  113  vi postgres-configmap.yaml
  114  kubectl create -f postgres-configmap.yaml
  115  ls /mnt
  116  vi postgres-storage.yaml
  117  kubectl create postgres-configmap.yaml
  118  kubectl create -f postgres-configmap.yaml
  119  kubectl create -f postgres-storage.yaml
  120  kubectl get pv --all-namespaces
  121  kubectl get pvc --all-namespaces
  122  vi postgres-deployment.yaml
  123  kubectl create postgres-deployment.yaml
  124  kubectl create -f postgres-deployment.yaml
  125  vi postgres-service.yaml
  126  kubectl create -f postgres-service.yaml
  127  kubectl get svc postgres
  128  psql -h localhost -U postgresadmin1 --password -p 31070 postgresdb
  129  sudo apt install postgresql-client-common
  130  psql -h localhost -U postgresadmin1 --password -p 31070 postgresdb
  131  sudo apt-get install postgresql-client
  132  psql -h localhost -U postgresadmin1 --password -p 30170 postgresdb
  133  psql -h localhost -U postgresadmin --password -p 30170 postgresdb
  135  psql -h localhost -U postgresadmin --password -p 30170 postgresdb
  136  pg_dump -U postgresadmin -W -F t postgresdb
  137  pg_dump -h localhost -U postgresadmin -p 10170 -W -F t postgresdb
  138  pg_dump -h localhost -U postgresadmin -p 3017histo0 -W -F t postgresdb
  142  ls post*
  144  kubectl get pods --all-namespaces
  145  kubectl get pods --all-namespacesexit
  150  cd charts/
  152  cd stable/
  153  cd postgresql/
  155  cd templates/
  157  pg_dump -h localhost -U postgresadmin -p 30170 -W -F t postgresdb


Follow the walkthrough at https://severalnines.com/blog/using-kubernetes-deploy-postgresql

ubuntu@ip-172-31-24-171:~/postgresql$ sudo vi postgres-configmap.yaml
ubuntu@ip-172-31-24-171:~/postgresql$ kubectl create -f postgres-configmap.yaml
ubuntu@ip-172-31-24-171:~/postgresql$ sudo vi postgres-configmap.yaml
ubuntu@ip-172-31-24-171:~/postgresql$ kubectl create -f postgres-configmap.yaml
ubuntu@ip-172-31-24-171:~/postgresql$ kubectl get pv --all-namespaces
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
postgres-pv-volume   5Gi        RWX            Retain           Bound    default/postgres-pv-claim   manual                  71s
ubuntu@ip-172-31-24-171:~/postgresql$ kubectl get pvc --all-namespaces
NAMESPACE   NAME                STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     postgres-pv-claim   Bound    postgres-pv-volume   5Gi        RWX            manual         2m21s
ubuntu@ip-172-31-24-171:~/postgresql$ sudo vi postgres-deployment.yaml
ubuntu@ip-172-31-24-171:~/postgresql$ kubectl create -f postgres-deployment.yaml
deployment.extensions/postgres created
ubuntu@ip-172-31-24-171:~/postgresql$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
default         postgres-78f78bfbfc-nkp7b                 1/1     Running     0          29s
ubuntu@ip-172-31-24-171:~/postgresql$ sudo vi postgres-service.yaml
ubuntu@ip-172-31-24-171:~/postgresql$ kubectl create -f postgres-service.yaml
service/postgres created
ubuntu@ip-172-31-24-171:~/postgresql$ kubectl get service postgres
NAME       TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
postgres   NodePort   10.43.88.54   <none>        5432:30799/TCP   34s

ubuntu@ip-172-31-24-171:~/postgresql$ sudo apt install postgresql-client-common
ubuntu@ip-172-31-24-171:~/postgresql$ sudo apt-get install postgresql-client
ubuntu@ip-172-31-24-171:~/postgresql$ psql -h localhost -U postgresadmin --password -p 30799 postgresdb
Password for user postgresadmin: 
psql (10.8 (Ubuntu 10.8-0ubuntu0.18.04.1), server 10.4 (Debian 10.4-2.pgdg90+1))
postgresdb=# \l
                                 List of databases
    Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
------------+----------+----------+------------+------------+-----------------------
 postgres   | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
 postgresdb | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
 template0  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
            |          |          |            |            | postgres=CTc/postgres
 template1  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
            |          |          |            |            | postgres=CTc/postgres
(4 rows)
postgresdb-# \q
ubuntu@ip-172-31-24-171:~/postgresql$ pg_dump -h localhost -U postgresadmin -p 30799 -W -F t postgresdb > pgdump
Password: 
ubuntu@ip-172-31-24-171:~/postgresql$ ls
pgdump  postgres-configmap.yaml  postgres-deployment.yaml  postgres-service.yaml  postgres-storage.yaml

Scheduled pg_dump Backup Kubernetes Job

ConfigMap for .pgpass file

see https://stackoverflow.com/questions/2893954/how-to-pass-in-password-to-pg-dump
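A sketch of the .pgpass wiring - libpq reads host:port:database:user:password lines from the file named by PGPASSFILE. Names and the placeholder password below are assumptions; a Secret is the better home for real credentials:

```shell
# Hypothetical .pgpass published as a ConfigMap for the backup Job to mount.
cat > .pgpass <<'EOF'
postgres:5432:postgresdb:postgresadmin:changeme
EOF
chmod 600 .pgpass   # libpq ignores a .pgpass readable by group/other
echo "kubectl create configmap pgpass-config --from-file=.pgpass"
```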


ConfigMap/Job for ssh key copy


Backup
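A hedged sketch of the scheduled backup - a CronJob (batch/v1beta1, matching the 1.14-era cluster) that mounts the .pgpass ConfigMap and runs pg_dump against the postgres service nightly. The image tag, schedule, names, and the emptyDir target are assumptions; real backups should land on durable storage:

```shell
# Hypothetical nightly pg_dump CronJob manifest - review before applying.
cat > pg-dump-cronjob.yaml <<'EOF'
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pg-dump-backup
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: pg-dump
            image: postgres:10.4
            env:
            - name: PGPASSFILE
              value: /secrets/.pgpass
            command: ["sh", "-c",
              "pg_dump -h postgres -U postgresadmin -F t postgresdb > /backup/pgdump-$(date +%F).tar"]
            volumeMounts:
            - name: pgpass
              mountPath: /secrets
            - name: backup
              mountPath: /backup
          volumes:
          - name: pgpass
            configMap:
              name: pgpass-config
              defaultMode: 0600
          - name: backup
            emptyDir: {}
EOF
echo "kubectl apply -f pg-dump-cronjob.yaml"
```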

Restore
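The tar-format dump taken above restores with pg_restore; a sketch against the NodePort service from the transcript (port 30799; --clean drops existing objects first, so use it deliberately):

```shell
# Hypothetical restore of the earlier tar-format dump into postgresdb.
RESTORE="pg_restore -h localhost -p 30799 -U postgresadmin -d postgresdb --clean -F t pgdump"
echo "$RESTORE"   # prompts for the password unless ~/.pgpass supplies it
```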


SQL Server

https://docs.microsoft.com/en-us/sql/integration-services/import-export-data/start-the-sql-server-import-and-export-wizard?view=sql-server-2017











1 Comment

  1. aws rds start-db-cluster --db-cluster-identifier dev --region us-east-1 --no-verify-ssl