
Friday, August 25, 2023

Download older versions of PHP

Wednesday, August 23, 2023

What are the differences between MSISDN, IMSI and ICCID in the telecom industry?

Let's compare MSISDN, IMSI, and ICCID in terms of their definitions, purposes, formats, and usage:


1. **MSISDN (Mobile Station International Subscriber Directory Number):**

   - **Definition:** MSISDN is a unique number that identifies a specific mobile subscriber in a telecommunication network. It's the actual phone number used to call or send messages to a mobile device.

   - **Purpose:** MSISDN is used for routing calls and messages to the correct mobile subscriber's device.

   - **Format:** The format of an MSISDN varies depending on the country's numbering plan. It typically includes the country code (CC), the National Destination Code (NDC) or Area Code, and the Subscriber Number (SN).

   - **Usage:** MSISDN is the number you dial to reach a person's mobile device. It's essential for voice calls, text messages, and multimedia messaging.


2. **IMSI (International Mobile Subscriber Identity):**

   - **Definition:** IMSI is a unique identifier associated with a mobile subscriber's account on a mobile network. It's used for authentication and identification purposes.

   - **Purpose:** IMSI is primarily used for network authentication, allowing the network to identify and provide services to the correct subscriber.

   - **Format:** IMSI consists of the Mobile Country Code (MCC), Mobile Network Code (MNC), and Mobile Subscriber Identification Number (MSIN).

   - **Usage:** IMSI is used internally by the network for authentication during the subscriber's interaction with the network.


3. **ICCID (Integrated Circuit Card Identifier):**

   - **Definition:** ICCID is a unique identifier assigned to a SIM card. It's used to identify the SIM card itself.

   - **Purpose:** ICCID is used for administrative purposes, such as activating a new SIM card, associating it with a mobile number, and managing SIM card inventory.

   - **Format:** ICCID is typically a numeric code, usually 19 to 20 digits long.

   - **Usage:** ICCID is primarily used by the network and service providers for managing SIM cards and related services.


In summary:


- **MSISDN:** Used for calling and messaging a mobile subscriber, following a country-specific numbering format.

- **IMSI:** Used for network authentication and service provisioning, composed of MCC, MNC, and MSIN.

- **ICCID:** Used to identify the SIM card itself, employed for administrative and management purposes.


These identifiers serve distinct roles within the telecommunications ecosystem and are essential for various aspects of mobile communication and network operation.
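
To make the formats above more concrete, here is a small illustrative PHP sketch that splits a sample IMSI and MSISDN into their components. The digits and field lengths chosen (3-digit MNC, 1-digit country code) are assumptions for illustration only; real MNCs can be 2 or 3 digits and country codes vary in length.

```
<?php
// Illustrative only: sample identifiers, not real subscriber data.
$imsi   = "310150123456789";   // hypothetical IMSI
$msisdn = "14155550123";       // hypothetical MSISDN (E.164 digits without '+')

// IMSI = MCC (3 digits) + MNC (2-3 digits, assumed 3 here) + MSIN
$mcc  = substr($imsi, 0, 3);
$mnc  = substr($imsi, 3, 3);
$msin = substr($imsi, 6);

// MSISDN = CC + NDC + SN (lengths vary by country; CC assumed 1 digit, NDC 3 digits here)
$cc  = substr($msisdn, 0, 1);
$ndc = substr($msisdn, 1, 3);
$sn  = substr($msisdn, 4);

echo "IMSI   -> MCC=$mcc MNC=$mnc MSIN=$msin\n";
echo "MSISDN -> CC=$cc NDC=$ndc SN=$sn\n";
?>
```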

Why is the package validity offered by Telecom Operators generally 28 days?

 

Telecom packages are often offered in 28-day cycles for a few reasons, although it's worth noting that package durations can vary by region and provider. Here are some common reasons for the 28-day cycle:


1. **Monthly Billing in a Shorter Period**: While a standard calendar month has about 30-31 days, telecom providers often offer packages with a duration of 28 days. This allows them to fit 13 billing cycles (28-day periods) into a year instead of 12, since 13 × 28 = 364 days, which can result in increased revenue for the company. This model essentially shortens the billing cycle, allowing providers to collect payments more frequently.


2. **Competitive Differentiation**: By offering a 28-day package instead of a monthly package, telecom companies can make their offers seem more frequent and competitive. It creates an impression that customers are getting more for their money, even though the actual amount of service provided may be similar to a monthly package.


3. **Marketing Strategies**: The 28-day cycle can be used as a marketing strategy to make customers feel that they are getting a better deal compared to monthly plans, as it appears as if they are getting an extra service cycle throughout the year.


4. **Usage Pattern Alignment**: Some telecom companies might argue that a 28-day cycle aligns better with users' consumption patterns, as it's closer to four weeks. This could result in customers recharging or renewing their plans at times when they are more likely to need additional services.


5. **Increased Revenue**: Shortening the billing cycle by offering 28-day packages can lead to increased revenue for the telecom company. This is because customers end up paying for an extra month of service over the course of a year.


6. **Flexibility and Customer Retention**: Shorter cycles can also provide customers with more flexibility. If someone's usage patterns or needs change within a shorter timeframe, they might find it easier to switch plans or providers after a 28-day period instead of waiting a full calendar month.


It's important for customers to carefully compare the benefits and costs of different plans, whether they're on a 28-day or a monthly cycle, to ensure they are getting the best value for their specific needs. Keep in mind that package durations can vary based on regional regulations, competition, and specific business strategies of telecom providers.

Tuesday, August 22, 2023

Is DevOps a Job Role or a Process?

DevOps is primarily a set of practices, principles, and cultural philosophies that emphasize collaboration, automation, and integration between development and operations teams. It's not a specific job role, nor is it just a single process. Instead, DevOps represents a holistic approach to software development and deployment that aims to improve the efficiency, speed, and reliability of the entire software development lifecycle.


While DevOps itself is not a role, there are job titles and roles that are closely associated with DevOps practices, including:


1. **DevOps Engineer:** This role focuses on implementing and managing the tools, processes, and infrastructure required for automating and streamlining the software delivery pipeline. DevOps engineers often work on tasks like configuring CI/CD pipelines, managing infrastructure as code, and setting up monitoring and alerting systems.


2. **Site Reliability Engineer (SRE):** SREs combine aspects of software engineering and IT operations to ensure the reliability and performance of large-scale applications and systems. They work on tasks like monitoring, incident response, capacity planning, and system scaling.


3. **Automation Engineer:** Automation engineers specialize in creating scripts, tools, and processes that automate various aspects of development, testing, deployment, and operations. They contribute to making the software delivery pipeline more efficient and less error-prone.


4. **Release Manager:** Release managers oversee the planning, coordination, and execution of software releases. They work to ensure that new features and bug fixes are deployed smoothly and reliably into production environments.


5. **Software Developer/Engineer:** Developers who work in organizations that embrace DevOps practices are often involved in tasks beyond writing code, such as setting up build pipelines, writing automated tests, and participating in discussions about deployment strategies.


6. **Operations Engineer:** While traditional operations roles may be distinct from development, the DevOps culture encourages operations engineers to collaborate closely with developers, share responsibilities, and work on automating deployment and management tasks.


It's important to note that DevOps is not solely about job titles or specific roles. It's a cultural shift and a way of working that encourages cross-functional collaboration, shared responsibility, and a focus on automation and continuous improvement across the entire software development lifecycle.

What is DevOps? (with Examples)

 DevOps, short for "Development" and "Operations," is a set of practices, principles, and cultural philosophies that aim to improve collaboration and communication between software development teams and IT operations teams. The primary goal of DevOps is to streamline and accelerate the software development and deployment process while maintaining a high level of reliability and stability.


In traditional software development approaches, development and operations were often treated as separate silos with distinct responsibilities. Developers focused on writing code and adding new features, while operations teams were responsible for deploying and maintaining the software in production environments. This separation could lead to challenges such as slow release cycles, inconsistencies between development and production environments, and difficulty in identifying and resolving issues.


DevOps seeks to address these challenges by promoting:


1. **Collaboration:** DevOps encourages close collaboration between development and operations teams, breaking down the traditional barriers between them. This collaboration helps in sharing knowledge, identifying potential issues early, and making informed decisions.


2. **Automation:** Automation plays a central role in DevOps practices. By automating tasks like code integration, testing, deployment, and infrastructure provisioning, teams can reduce human error, increase efficiency, and achieve faster and more reliable releases.


3. **Continuous Integration (CI) and Continuous Deployment (CD):** CI/CD practices involve integrating code changes frequently and automatically into a shared repository. This is followed by automated testing and deployment processes that aim to deliver new features and bug fixes to production environments quickly and safely.


4. **Infrastructure as Code (IaC):** IaC is the practice of managing and provisioning infrastructure using code and automation tools. This enables teams to treat infrastructure configuration as code, making it versionable, repeatable, and easily reproducible.


5. **Monitoring and Feedback:** DevOps emphasizes the importance of monitoring applications and infrastructure in real-time. Feedback loops based on monitoring data help identify performance issues, bottlenecks, and other problems, allowing teams to react promptly and continuously improve their systems.


6. **Cultural Shift:** Beyond processes and tools, DevOps encourages a cultural shift that emphasizes collaboration, shared responsibility, and a willingness to learn and adapt. This culture promotes a sense of ownership and accountability among team members.


7. **Microservices and Containerization:** DevOps often aligns well with the use of microservices architecture and containerization technologies like Docker and Kubernetes. These technologies enable teams to build, deploy, and manage applications in a modular and scalable manner.


Overall, DevOps aims to create a smoother, more efficient software development lifecycle that can respond to changing requirements and market demands effectively while ensuring the stability and reliability of the software in production environments.


Examples:

**Example 1: Continuous Integration and Continuous Deployment (CI/CD)**


Imagine a software development team working on a web application. They follow DevOps practices for CI/CD:


1. **Continuous Integration:** Developers regularly push their code changes to a shared repository, such as Git. An automated build process triggers whenever new code is pushed. This build process compiles the code, runs automated tests, and checks for any integration issues.


2. **Continuous Deployment:** After passing the tests and checks in the continuous integration phase, the code is automatically deployed to a staging environment. This environment closely resembles the production environment but is used for final testing before the actual release.


3. **Automated Testing:** Automated tests ensure that new code changes don't introduce bugs or regressions. This includes unit tests, integration tests, and even user interface tests.


4. **Feedback Loop:** If any tests fail, the development team is notified immediately. They can then fix the issues and repeat the process until the tests pass.


5. **Release to Production:** Once the code passes all tests in the staging environment, it can be automatically deployed to the production environment using the same automated deployment process.


**Example 2: Infrastructure as Code (IaC)**


Consider a team responsible for managing the infrastructure of an e-commerce website. They utilize Infrastructure as Code principles:


1. **Versioned Infrastructure:** The team defines the infrastructure components, such as servers, databases, and networking, using code (e.g., using tools like Terraform or CloudFormation). This code is versioned and stored in a repository.


2. **Automated Provisioning:** Whenever there's a need to create a new environment (e.g., development, staging, production), the team runs the IaC code. This automatically provisions the required infrastructure with consistent configurations.


3. **Scalability:** If the website experiences increased traffic, the team can adjust the infrastructure code to add more servers or resources. The change is then applied automatically, ensuring scalability.


4. **Consistency:** Since infrastructure is managed as code, there's less chance of inconsistencies between different environments, reducing the risk of issues arising due to configuration differences.


**Example 3: Collaboration and Cultural Shift**


In a DevOps-oriented organization, developers and operations teams collaborate closely:


1. **Shared Responsibility:** Developers are not only responsible for writing code but also for considering how their code will be deployed and maintained. Operations teams provide insights into deployment and operational concerns early in the development process.


2. **Cross-Functional Teams:** Development and operations team members may work together on projects from the beginning, ensuring that operational considerations are part of the design and development discussions.


3. **Learning and Improvement:** If an issue arises in production, instead of assigning blame, the teams work together to diagnose and resolve the issue. This approach encourages a culture of continuous learning and improvement.


4. **Automation Sharing:** Developers and operations teams collaborate to create automation scripts and tools that benefit both sides. For instance, developers might contribute to automating deployment processes, and operations teams might contribute to monitoring and alerting setups.


These examples showcase how DevOps practices bridge the gap between development and operations, enabling faster, more reliable software delivery while maintaining a focus on stability and quality.

Difference between Hashing and Encryption in Computer Security (with Examples)

 Hashing and encryption are both cryptographic techniques used to protect data, but they serve different purposes and have distinct characteristics. Here's a breakdown of the key differences between hashing and encryption:


1. **Purpose**:

   - **Hashing**: Hashing is primarily used for data integrity and verification. It takes input data (often of variable length) and produces a fixed-size string of characters, known as a hash value or hash digest. The main goal is to quickly verify whether the original data has been altered or tampered with. Hash functions are one-way, meaning you can't reverse the process to retrieve the original data.

   

   - **Encryption**: Encryption is used to protect data confidentiality. It transforms plaintext data into ciphertext using an algorithm and an encryption key. The main objective is to ensure that unauthorized parties cannot read the original data without the decryption key. Encryption is a reversible process, meaning you can decrypt the ciphertext back into the original plaintext with the correct key.


2. **Reversibility**:

   - **Hashing**: Hashing is a one-way process. Once data is hashed, it cannot be reversed to obtain the original data. Hash functions are designed to be irreversible, making them suitable for tasks like password storage or checksum verification.

   

   - **Encryption**: Encryption is a reversible process. Ciphertext can be decrypted back to its original plaintext using the appropriate decryption key. Encryption is commonly used for securing communication, storage, and data transmission.


3. **Output Length**:

   - **Hashing**: Hashing algorithms produce fixed-length hash values, regardless of the length of the input data. For example, a common hashing algorithm like SHA-256 always produces a 256-bit hash value.

   

   - **Encryption**: Encryption algorithms produce ciphertext that can be of varying lengths, depending on the algorithm and the input data length. The length of the ciphertext is often related to the length of the original plaintext.


4. **Key Usage**:

   - **Hashing**: Hashing typically doesn't involve the use of keys. Hash functions take input data and produce hash values. There's no key required for hashing.

   

   - **Encryption**: Encryption involves the use of encryption and decryption keys. The encryption key is used to transform plaintext into ciphertext, and the decryption key is used to reverse the process and retrieve the original plaintext.


5. **Use Cases**:

   - **Hashing**: Hashing is used for tasks like password storage (hashing passwords before storing them in databases), digital signatures (ensuring data integrity in digital communication), and data verification (checksums for files).

   

   - **Encryption**: Encryption is used for securing sensitive data during transmission (SSL/TLS for web traffic), protecting data at rest (encrypted hard drives), and ensuring confidentiality in various applications.


In summary, hashing is primarily used for data integrity verification and is irreversible, while encryption focuses on data confidentiality and is a reversible process. Both techniques are essential components of modern cryptography and have distinct applications in securing digital information.


Examples

**Hashing Example**:


Imagine you're a website administrator and you want to store user passwords securely. Instead of storing the actual passwords in your database, you decide to hash them. You use the SHA-256 hashing algorithm, which produces a fixed 256-bit hash value.


User's Password: "mySecurePassword123"


SHA-256 Hash: 

```

4c6a57e94203f67b50f17b0368c74d81ebe03c5e5d95e21d2ef804ec7a96b2e7

```


When a user creates an account or changes their password, you hash their password using SHA-256 and store the hash in the database. When the user tries to log in, you hash the entered password and compare it to the stored hash. If the hashes match, the password is correct, and you grant access.
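
A rough PHP sketch of this login flow is shown below. It mirrors the plain SHA-256 example above for simplicity; in practice the built-in password_hash()/password_verify() functions are preferred because they also add a salt.

```
<?php
// Sketch: hash a password with SHA-256 and verify a login attempt.
// Unsalted hashing is shown only to mirror the example above;
// password_hash()/password_verify() are the recommended approach.
$password   = "mySecurePassword123";
$storedHash = hash('sha256', $password);   // value saved in the database

$attempt = "mySecurePassword123";          // value entered at login
if (hash_equals($storedHash, hash('sha256', $attempt))) {
    echo "Password correct, access granted\n";
} else {
    echo "Password incorrect\n";
}
?>
```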


**Encryption Example**:


Let's say you're sending sensitive information over the internet, such as credit card details, and you want to ensure that this data is secure during transmission. You decide to encrypt the data using the AES (Advanced Encryption Standard) algorithm.


Plaintext (Original Data): "Credit Card Number: 1234-5678-9012-3456"


Encryption Key: "secretpassword123"


After applying AES encryption, the data might look like:

```

c8b3290d1d388ec2e6f10b4669fc7f00

```


You transmit this encrypted data over the internet. Only the intended recipient, who has the decryption key ("secretpassword123"), can decrypt the data and obtain the original credit card number.
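
Below is a minimal PHP sketch of the same idea using openssl_encrypt() and openssl_decrypt(). It assumes AES-256-CBC with a key derived from the password and a random IV, so the resulting ciphertext will differ from the illustrative value above; a production system should use a proper key-derivation function and authenticated encryption.

```
<?php
// Sketch: encrypt and decrypt with AES-256-CBC (illustrative, not production-grade).
$plaintext = "Credit Card Number: 1234-5678-9012-3456";
$key = hash('sha256', 'secretpassword123', true); // derive a 32-byte key from the password
$iv  = openssl_random_pseudo_bytes(openssl_cipher_iv_length('aes-256-cbc'));

$ciphertext = openssl_encrypt($plaintext, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
echo "Ciphertext (hex): " . bin2hex($ciphertext) . "\n";

// Only someone with the same key (and the IV) can recover the plaintext.
$decrypted = openssl_decrypt($ciphertext, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
echo "Decrypted: " . $decrypted . "\n";
?>
```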


In summary, hashing is used for data integrity verification and produces irreversible hash values, while encryption is used to protect data confidentiality and can be reversed with the appropriate decryption key.

Physical IP Vs Virtual IP Vs SCAN IP (with Examples)

Physical IP, Virtual IP, and SCAN IP are terms often used in the context of networking and IT infrastructure. Let's break down the differences between these concepts:


1. Physical IP (Internet Protocol):

A physical IP address is a unique numerical label assigned to each device (like computers, routers, servers, etc.) connected to a network. It serves as an identifier that helps in routing data packets to the correct destination. Physical IP addresses are associated with the hardware of the device and are typically static, meaning they don't change frequently.


2. Virtual IP (VIP):

A virtual IP address is an IP address that is not associated with a specific physical device, but rather with a service or a group of devices that provide redundancy or load balancing. Virtual IPs are often used to ensure high availability and fault tolerance in server clusters. When a client requests a service, the virtual IP redirects the request to one of the available physical servers in the cluster, helping to distribute the workload evenly and providing redundancy in case one server fails.


3. SCAN IP (Single Client Access Name):

SCAN IP is a concept used in Oracle Real Application Clusters (RAC), which is a technology that allows multiple servers to work together as a single system to provide high availability and scalability for databases. SCAN IP provides a single DNS entry for clients to connect to the database cluster. This single DNS entry resolves to multiple IP addresses (usually three) that are associated with different nodes in the RAC cluster. This helps distribute the database client connections across the nodes and simplifies connection management.


In summary:

- Physical IP addresses are unique identifiers assigned to individual devices on a network.

- Virtual IP addresses are used for load balancing and high availability, directing client requests to a group of devices.

- SCAN IP is specific to Oracle RAC, providing a single DNS entry that resolves to multiple IP addresses for load distribution and easier client connection management to the database cluster.



Examples to make it more clear:


1. **Physical IP Address**:

Imagine you have a small office network with three computers: Computer A, Computer B, and Computer C. Each of these computers has a physical IP address assigned to it.


- Computer A: IP Address - 192.168.1.2

- Computer B: IP Address - 192.168.1.3

- Computer C: IP Address - 192.168.1.4


These IP addresses uniquely identify each computer on the network. When data packets need to be sent from one computer to another, they use these IP addresses to ensure the packets reach the correct destination.


2. **Virtual IP Address (VIP)**:

Let's say you have a web application that runs on a cluster of servers to handle incoming user requests. To ensure that the workload is distributed evenly and to provide fault tolerance, you set up a virtual IP address for the cluster. This IP address isn't tied to any specific physical server but rather represents the entire cluster.


- Virtual IP Address: 10.0.0.100


You have three physical servers in your cluster:


- Server 1: IP Address - 10.0.0.101

- Server 2: IP Address - 10.0.0.102

- Server 3: IP Address - 10.0.0.103


When a user tries to access your web application using the virtual IP address (10.0.0.100), the load balancer associated with that VIP will distribute the incoming request to one of the physical servers (e.g., Server 1). If Server 1 becomes overloaded or experiences issues, the load balancer can redirect traffic to Server 2 or Server 3, ensuring the application remains available.


3. **SCAN IP (Single Client Access Name)**:

Consider a scenario where you're using Oracle Real Application Clusters (RAC) to manage a database that serves a large number of clients. In this setup, you can use SCAN IP to simplify client connections.


You have an Oracle RAC cluster with three nodes:


- Node 1: IP Address - 192.168.1.10

- Node 2: IP Address - 192.168.1.11

- Node 3: IP Address - 192.168.1.12


With SCAN IP, you have a single DNS entry:


- SCAN IP Address: scan.mydatabase.com


When clients want to connect to the Oracle database, they use the SCAN IP address (scan.mydatabase.com). Behind the scenes, the DNS resolution for this SCAN IP results in the three node IP addresses. This simplifies client connection setup and load distribution, as clients don't need to know the individual node addresses.


So, if a client connects to scan.mydatabase.com, the DNS system resolves this to one of the three IP addresses (e.g., 192.168.1.10), enabling the client to communicate with one of the nodes in the Oracle RAC cluster.
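
If you want to observe this resolution from a client machine, a quick PHP sketch (using the hypothetical name scan.mydatabase.com from the example above) might look like this:

```
<?php
// Sketch: resolve a SCAN name and see which addresses it maps to.
// "scan.mydatabase.com" is the hypothetical name from the example above.
$ips = gethostbynamel('scan.mydatabase.com');

if ($ips === false) {
    echo "Could not resolve the SCAN name\n";
} else {
    // An Oracle RAC SCAN name typically resolves to three addresses.
    foreach ($ips as $ip) {
        echo "SCAN resolves to: $ip\n";
    }
}
?>
```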


In summary, these concepts highlight how IP addressing can be used to manage and optimize network resources, distribute workloads, and simplify client connections in various scenarios.

Monday, August 14, 2023

Free up large space on windows 11

Saturday, August 12, 2023

Must have APP for Eye protection from Computer screen | FLUX

Wednesday, August 9, 2023

Use your LAN connection network to browse websites only accessible with...

Tuesday, August 8, 2023

Best and light screenshot apps for working professionals on windows OS

Monday, August 7, 2023

Things to consider before using Group by query in SQL

Saturday, August 5, 2023

Differences between RPM-based and Debian-based Operating Systems with examples

 The main difference between RPM-based and Debian-based operating systems lies in the package management systems they use and the package formats they support. These differences influence how software is installed, managed, and updated on each type of system.


**RPM-based OS**:


1. **Package Format**: RPM (RPM Package Manager) is the package format used in RPM-based distributions. RPM packages have the extension `.rpm`. These packages contain the software along with its metadata and instructions for installation.


2. **Package Management System**: RPM-based distributions use package managers like `dnf` (Dandified YUM) or `yum` (Yellowdog Updater Modified) to handle the installation, removal, and management of software packages. These package managers resolve dependencies and download the required RPM packages from software repositories.


3. **Software Repositories**: RPM-based systems typically have centralized software repositories that contain a wide range of software packages. Users can add and enable different repositories to access additional software.


4. **Configuration Files**: RPM-based distributions store system configurations in the `/etc` directory. Configuration files usually have the `.conf` or `.cfg` extension.


**Examples of RPM-based OS**

1. Fedora: A community-driven distribution sponsored by Red Hat, known for its frequent releases and cutting-edge software.

2. CentOS: A community-supported distribution that aims to provide a free and open-source alternative to Red Hat Enterprise Linux (RHEL).

3. Red Hat Enterprise Linux (RHEL): A commercial distribution with long-term support, widely used in enterprise environments.

4. openSUSE: A community-developed distribution that focuses on ease of use and stability, available in two editions: Leap (stable) and Tumbleweed (rolling release).


**Debian-based OS**:


1. **Package Format**: Debian-based distributions use the DEB (Debian Package) format for software packages. DEB packages have the extension `.deb`. Like RPM packages, DEB packages contain software and metadata for installation.


2. **Package Management System**: Debian-based distributions use package managers like `apt` (Advanced Package Tool) or `dpkg` (Debian Package) to handle software installation and management. `apt` is the more user-friendly front-end for `dpkg`, which handles the low-level package operations.


3. **Software Repositories**: Debian-based systems also rely on centralized software repositories, which are typically signed and maintained by the distribution maintainers. Users can add and enable additional repositories as needed.


4. **Configuration Files**: Debian-based distributions store system configurations in the `/etc` directory, similar to RPM-based systems. Configuration files usually have the `.conf` or `.cfg` extension, just like RPM-based distributions.


Both RPM-based and Debian-based operating systems are widely used and have extensive software ecosystems. The choice between the two largely depends on personal preference, specific use case, and familiarity with the distribution's package management system and tools.


**Examples of Debian-based OS:**

1. Debian: One of the oldest and most well-established community-driven distributions, known for its stability and adherence to free software principles.

2. Ubuntu: Based on Debian, Ubuntu is one of the most popular desktop and server distributions, offering a user-friendly experience and long-term support.

3. Linux Mint: A user-friendly distribution built on top of Ubuntu, providing additional features, codecs, and a more polished desktop environment.

4. elementary OS: A visually appealing and beginner-friendly distribution designed for users transitioning from macOS or Windows.


Difference between dnf, yum and apt in Linux-based operating systems

 `dnf`, `yum`, and `apt` are package managers used in different Linux distributions. Each one has its specific features and is associated with different distributions.


1. **dnf (Dandified YUM)**:

   - Used primarily in RPM-based distributions like Fedora, CentOS 8, RHEL 8, and other derivatives.

   - Provides a more modern and improved version of `yum`.

   - Faster and more efficient due to the use of libsolv library for dependency resolution.

   - Has a different command syntax compared to `yum`.

   - Introduced in Fedora 18 and became the default package manager in CentOS 8 and newer.


2. **yum (Yellowdog Updater Modified)**:

   - Used in RPM-based distributions like CentOS 7, RHEL 7, and older versions of Fedora.

   - An older package manager that served as the predecessor to `dnf`.

   - Slower and less efficient compared to `dnf`, especially in large repositories with complex dependencies.

   - Has a different set of commands and options compared to `dnf`.

   - Still available in CentOS 8, where the `yum` command is provided as a compatibility wrapper around `dnf`.


3. **apt (Advanced Package Tool)**:

   - Used primarily in Debian-based distributions like Debian, Ubuntu, and their derivatives (e.g., Linux Mint, elementary OS).

   - Uses the `.deb` package format instead of RPM.

   - Has a different set of commands and options compared to `dnf` and `yum`.

   - Uses `dpkg` as the underlying low-level package manager.

   - Generally regarded as easy to use and user-friendly.


While all three package managers serve the same purpose of installing, removing, and managing software packages on a Linux system, the main differences lie in the distribution they are associated with, the package formats they handle (RPM for `dnf` and `yum`, and `.deb` for `apt`), and the specific commands and options they offer. The choice of package manager depends on the Linux distribution being used. For instance, if you are using Fedora, CentOS 8, or RHEL 8, `dnf` would be the default and recommended package manager, while for Debian-based systems, `apt` is the standard choice.

Difference between 'dnf' and 'yum' in CentOS

`dnf` (Dandified YUM) has become the default package manager in CentOS 8 and newer versions. Both `dnf` and `yum` are package managers used in CentOS and other RPM-based Linux distributions, but there are some differences between the two:


1. **Performance**: `dnf` is generally faster and more efficient than `yum`. It uses the libsolv library for dependency resolution, which is more powerful and faster than the `yum`-based resolver.


2. **Command syntax**: While both `dnf` and `yum` have similar command structures, some commands and options differ slightly between the two. For example, `dnf group install` replaces `yum groupinstall`, and `dnf` adds new subcommands such as `dnf module` for working with modular content.


3. **Dependencies and plugins**: `dnf` uses a plugin model that's different from `yum`. Some plugins may be available for one but not the other, or they might have different implementations.


4. **Transaction history**: `dnf` keeps its transaction history in SQLite format, while `yum` uses the simpler "yum history" command.


5. **Default behavior**: In CentOS 8 and later, `dnf` is the default package manager, and the `yum` command remains available as a compatibility wrapper that points to `dnf`. In earlier versions of CentOS, `yum` was the default.


6. **User experience**: `dnf` provides better feedback during the command execution and generally has more user-friendly output.


Keep in mind that since `dnf` has been adopted as the default package manager, it is recommended to use `dnf` in CentOS 8 and newer versions for compatibility and better performance. If you are using an older CentOS version that still uses `yum`, consider upgrading to a newer release to take advantage of `dnf`.

Various search operations on CentOS

Search for files or directories:

In CentOS 7, you can use various commands and tools to search for files or folders. Here are some common methods:


1. Using the `find` command:

The `find` command is a powerful tool to search for files and directories based on various criteria.

To search for a file named `filename.txt` starting from the root directory (/), open a terminal and run:


find / -name "filename.txt"

Replace `"filename.txt"` with the name of the file you're looking for.


To search for a directory named `dirname`, use the same command:

find / -type d -name "dirname"


2. Using the `locate` command:

The `locate` command utilizes a pre-built database of files for faster searching.

First, make sure the `mlocate` package is installed:

sudo yum install mlocate


Then, update the database:

sudo updatedb


Finally, search for a file or directory:

locate filename.txt

locate dirname

Note that `locate` provides faster results but might not show the most up-to-date information as it depends on the last database update.


3. Using `grep` command (for specific text within files):

If you are looking for files containing specific text, you can use the `grep` command. For example, to search for the word "example" within all files in the current directory and its subdirectories:

grep -r "example" .

The `.` represents the current directory. You can replace it with a specific directory path.


4. Using `whereis` command (for system binaries and manuals):

The `whereis` command is helpful for finding the binary and source files of a command or application.

For example, to find the location of the `ls` command:

whereis ls

These methods should help you search for files and folders efficiently on CentOS 7. Choose the appropriate method based on your requirements.


Search by filename extension:

To search for files by their extension in CentOS 7, you can use the `find` command along with the `-name` option and a wildcard to specify the file extension. Here's how you can do it:

Let's say you want to search for files with the extension `.txt` in the `/home/user/documents` directory:

find /home/user/documents -type f -name "*.txt"


Explanation:

- `find`: The command to search for files and directories.

- `/home/user/documents`: The starting directory for the search. Replace this with the directory where you want to begin the search. If you don't know where the file might be, start the search from the root directory (/) instead of a full path, like

find / -type f -name "*.txt"

- `-type f`: Specifies that we are only interested in files (not directories).

- `-name "*.txt"`: The `-name` option allows us to specify a pattern to match filenames. Here, we use the wildcard `*` to match any characters before the `.txt` extension. This way, it will find all files with the `.txt` extension.


You can adjust the file extension and the directory path as needed to search for different file types in different locations. If you want to search for different file extensions, simply change `*.txt` to the desired extension (e.g., `*.pdf`, `*.jpg`, etc.).

Uninstall MariaDB completely along with its dependencies from CentOS

 To uninstall MariaDB on CentOS 8, you can use the `yum` package manager. Follow these steps to uninstall MariaDB:


1. **Stop the MariaDB service**:

   Before uninstalling, it's better to stop the MariaDB service to avoid any issues. Open a terminal and run the following command:

   sudo systemctl stop mariadb


2. **Remove the MariaDB packages**:

   Once the service is stopped, you can proceed to remove the MariaDB packages using `yum`:

   sudo yum remove mariadb-server mariadb


3. **Remove data and configuration files (optional)**:

   By default, the package manager may not remove the MariaDB data and configuration files. If you want to remove them as well, run the following commands:

   sudo rm -rf /var/lib/mysql

   sudo rm -rf /etc/my.cnf


   Please be cautious while running the `rm` command, as it permanently deletes the files and directories.


4. **Clean up dependencies (optional)**:

   You can also clean up any unused dependencies to free up disk space:

   sudo yum autoremove


That's it! MariaDB should now be uninstalled from your CentOS 8 system. Before performing these steps, make sure to back up any important databases to prevent data loss.

Install MariaDB on CentOS

 To install MariaDB on CentOS 8, follow these steps:


1. Update the system packages:

   Before installing any software, it's a good idea to update your system to ensure you have the latest packages. Open a terminal or SSH into your CentOS 8 server and run the following commands:

   sudo dnf clean all

   sudo dnf update


2. Install MariaDB server:

CentOS 8 uses the DNF package manager, so you can easily install MariaDB by running the following command:

   sudo dnf install mariadb-server


3. Start and enable the MariaDB service:

   After the installation is complete, start the MariaDB service and enable it to start on boot using the following commands:

   sudo systemctl start mariadb

   sudo systemctl enable mariadb


4. Secure the MariaDB installation:

   By default, MariaDB is not configured with a root password, and it is recommended to set a root password for security. You can run the following command to secure your installation:

   sudo mysql_secure_installation

   This command will prompt you to set the root password, remove anonymous users, disallow root login remotely, and remove the test database. You can choose 'Y' or 'N' based on your preferences and requirements.


5. Verify the installation:

   To check if MariaDB is running and to verify its version, you can use the following command:

   sudo systemctl status mariadb

   mysql --version


That's it! MariaDB is now installed and running on your CentOS 8 system. You can interact with the database using the `mysql` command-line client or other tools like phpMyAdmin if you have a web server installed.

Solved: MariaDB failed to start with error message "job for mariadb.service failed because the control process exited with error code"

Here are some steps you can follow to resolve the issue:


1. **Check for Running Processes**: As the logs indicate, another process is already using port 3306. You can verify this by running the following command:

   sudo netstat -tulnp | grep 3306


 The command will show you the process ID (PID) of the process using port 3306. Make a note of the PID.

for example,

[root@Pinrecovery ~]# sudo netstat -tulnp | grep 3306

tcp6       0      0 :::3306                 :::*                    LISTEN      110920/mysqld


Here, the process id is: 110920


2. **Stop the Conflicting Process**: Once you identify the PID of the process using port 3306, you can stop it using the `kill` command. Replace `<PID>` with the actual PID you obtained in the previous step:

   sudo kill <PID>


3. **Start MariaDB**: After stopping the conflicting process, try starting the MariaDB service again:

   sudo systemctl start mariadb


4. **Check SELinux**: If you're still having issues with starting MariaDB, ensure that SELinux is not causing any problems. Temporarily disable SELinux to see if it resolves the issue:

   sudo setenforce 0

   However, keep in mind that disabling SELinux is not recommended for security reasons. If SELinux is causing the issue, you should investigate and configure SELinux policies appropriately.


5. **Verify Configuration**: Double-check your MariaDB configuration files (`/etc/my.cnf` or `/etc/mysql/my.cnf`) for any incorrect settings. Ensure that there are no duplicate configurations or conflicts with other services.


6. **Check Hardware/Software Issues**: If the problem persists, investigate for any potential hardware or software issues on your system that might be affecting MariaDB's ability to start.


After attempting the above steps, try starting the MariaDB service again. If the issue persists, review the error messages carefully to understand the root cause, and if needed, seek further assistance from the MariaDB community or forums.

Solution for "error 1045: access denied for user 'root'@'localhost' (using password: no)"

The error message "1045: Access denied for user 'root'@'localhost' (using password: no)" indicates that you are trying to connect to the MariaDB database server as the 'root' user without providing a password, but the server is expecting one.

Here are some steps to troubleshoot and resolve the issue:

1. Check your password:

   Ensure that you are using the correct password for the 'root' user. By default, MariaDB sets an empty password for the 'root' user during installation. If you have set a password and forgotten it, you might need to reset it.


2. Provide the password in your PHP script:

   If you have set a password for the 'root' user, you need to provide it when connecting to the database using `mysqli`. Update your PHP script to include the correct password:


   <?php

   $servername = "localhost";

   $username = "root";

   $password = "your_root_password"; // Update this with the actual password

   $dbname = "your_database";


   // Create connection

   $conn = new mysqli($servername, $username, $password, $dbname);


   // Check connection

   if ($conn->connect_error) {

       die("Connection failed: " . $conn->connect_error);

   }


   echo "Connected successfully";


   // Close connection

   $conn->close();

   ?>



3. Verify MariaDB service status:

   Make sure the MariaDB service is running on your CentOS 8 system. You can check the status using the following command:

   sudo systemctl status mariadb


   If it's not running, start the service:

   sudo systemctl start mariadb


4. Check MariaDB user privileges:

   It's possible that the 'root' user does not have the necessary privileges to connect from 'localhost'. Log in to the MariaDB server as the root user:

   sudo mysql -u root


   Once logged in, check the user privileges:

   MariaDB [(none)]> SELECT user, host FROM mysql.user;

 Make sure there is an entry for 'root' user with 'localhost' as the host. If it's not there, you can add it:


   MariaDB [(none)]> CREATE USER 'root'@'localhost' IDENTIFIED BY 'your_root_password';

   MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;

   MariaDB [(none)]> FLUSH PRIVILEGES;

 

   Remember to replace `'your_root_password'` with the actual password you want to set.


5. Firewall considerations:

   Ensure that there are no firewall rules blocking the connection to the MariaDB server on localhost.

After performing these steps, try running your PHP script again. It should connect to the MariaDB server without the access denied error.


If you still encounter issues with access denied, here are a few things to check:


1. Verify the MariaDB root password:

   If you are unable to log in as the root user using the correct password, it's possible that the password is incorrect. You can try resetting the root password following these steps:


   - Stop the MariaDB service:

   sudo systemctl stop mariadb

 

   - Start the MariaDB server without permission checks:

   sudo mysqld_safe --skip-grant-tables &


   - Connect to the MariaDB server as the root user:

   mysql -u root

 

   - Update the root user's password:

   USE mysql;

   UPDATE user SET password = PASSWORD('new_password') WHERE User = 'root';

   FLUSH PRIVILEGES;


   Replace `'new_password'` with your desired new password.


   - Exit the MariaDB prompt:

   EXIT;


   - Stop the MariaDB server:

   sudo pkill mysqld_safe


   - Start the MariaDB service again:

   sudo systemctl start mariadb


2. Check for other potential login issues:

   It's possible that there are other issues causing the access denied error. For example, the 'root' user might not be allowed to connect from 'localhost'. Make sure you have the correct host specified in your PHP script (e.g., `'localhost'`).


3. Verify PHP configuration:

   Ensure that you are using the correct PHP configuration (`php.ini`) and that it is loading the `mysqli` extension properly.


Will "mysqli" extension work for MariaDB database?

Yes, `mysqli` can work with MariaDB in PHP. The `mysqli` extension stands for "MySQL Improved" and is designed to work with both MySQL and MariaDB databases. MariaDB is a fork of MySQL, so they share a lot of similarities and are largely compatible with each other.

When you use the `mysqli` extension in PHP, you can connect to both MySQL and MariaDB databases using the same functions and methods. This is because the `mysqli` extension provides an improved and more feature-rich API for accessing MySQL and MariaDB databases in PHP.

Here's a simple example of connecting to a MariaDB database using `mysqli` in PHP:


<?php

$servername = "localhost";

$username = "your_username";

$password = "your_password";

$dbname = "your_database";


// Create connection

$conn = new mysqli($servername, $username, $password, $dbname);


// Check connection

if ($conn->connect_error) {

    die("Connection failed: " . $conn->connect_error);

}


echo "Connected successfully";


// Close connection

$conn->close();

?>


Remember to replace `your_username`, `your_password`, and `your_database` with your actual database credentials.

So, whether you are working with MySQL or MariaDB, you can use `mysqli` in PHP to interact with the database effectively.

Install oci8 on CentOS 8


Now that you have the necessary tools and libraries installed, you can proceed with the next steps to install the OCI8 extension for PHP on CentOS 8:

 

1. Install the Oracle Instant Client:

   - Download the Oracle Instant Client RPM packages for your architecture from the Oracle website (https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html). You'll need the oracle-instantclient-basic, oracle-instantclient-devel, and oracle-instantclient-sqlplus packages.

   - Transfer the downloaded RPM packages to your CentOS 8 system if you downloaded them on a different machine.

Note: for CentOS, it is better to download the `.rpm` packages rather than the `.zip` archives.

2. Install the Oracle Instant Client RPM packages:

Go to the directory where you downloaded the oracle instant-client files and install those files:

For example, for Oracle Instant Client version 11.2:

sudo dnf install oracle-instantclient11.2-basic-11.2.0.4.0-1.x86_64.rpm

sudo dnf install oracle-instantclient11.2-devel-11.2.0.4.0-1.x86_64.rpm

sudo dnf install oracle-instantclient11.2-sqlplus-11.2.0.4.0-1.x86_64.rpm

To verify whether the Oracle Instant Client "devel" package is installed on your CentOS system, you can use the package management tool `rpm` or `dnf`. Here's how you can list the installed Instant Client packages and confirm the devel package is among them:

Using `rpm`:

rpm -qa | grep oracle-instantclient

Using `dnf`:

dnf list installed | grep oracle-instantclient

 

3. Verify the ORACLE_HOME environment variable:

echo $ORACLE_HOME

Ensure that the ORACLE_HOME environment variable is set correctly and points to the location where you installed the Oracle Instant Client. If it's not set correctly, you can set it as follows:

export ORACLE_HOME=/path/to/instant/client

During the installation process, you may be prompted to provide the path to the Oracle Instant Client library. If prompted, enter the correct path:

Enter the path: instantclient,/usr/lib/oracle/19.20/client64/lib

 

 

 

4. Set the environment variables required for OCI8 and PHP:

 

echo 'export ORACLE_HOME=/usr/lib/oracle/19.12/client64' | sudo tee -a /etc/profile.d/oracle.sh

echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/oracle/19.12/client64/lib' | sudo tee -a /etc/profile.d/oracle.sh

sudo ldconfig

Once you are done with the above steps, the environment is set for the OCI8 installation. Now follow the steps below.

5. Stop Apache and uninstall any older version of OCI8 (stopping Apache is very important):

 service httpd stop

 pecl uninstall oci8

 

6. Install php-pear and php-devel:

 sudo yum install php-pear php-devel

 pear download pecl/oci8

7. The next commands depend on the version of oci8 downloaded above.

$ tar xvzf oci8-2.2.0.tgz

$ cd oci8-2.2.0/

$ phpize

$ export PHP_DTRACE=yes

 

 

8. Make sure the Instant Client path below matches your installed version. Mine was version 11.2, so it was located under /usr/lib/oracle/11.2/client64. Also note that some tutorials ask for the ORACLE_HOME folder (theoretically /usr/lib/oracle/11.2/client64), but for Instant Client you should point to the lib folder underneath it (that is what worked for me). Adjust the version number in the path to match your installation:

$ ./configure --with-oci8=instantclient,/usr/lib/oracle/12.2/client64/lib/

$ make

$ make install

 

9. An oci8.so extension file is now built at: /usr/lib64/php/modules/oci8.so

10. Check whether OCI8 has been successfully installed:

php -m | grep oci8

11. sudo service httpd restart

The following steps are not needed in most cases. If they are required in your case, go through them as well (though it is suggested to first try running PHP with OCI8 before applying them):

# THIS STEP NOT NEEDED if SELinux disabled on your server/box, but if SELinux is enabled run: setsebool -P httpd_execmem 1

# NOW add:   extension=oci8.so    at the bottom of your php.ini file (probably in /etc/php.ini)

# Add extension_dir=/usr/lib64/php/modules/
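
Once the extension loads, a quick connectivity check from PHP might look like the sketch below. The connection string, username, and password are placeholders you would replace with your own Oracle database details.

```
<?php
// Sketch: verify the OCI8 extension can connect to an Oracle database.
// Host, service name, and credentials below are placeholders.
$conn = oci_connect('my_user', 'my_password', '//dbhost.example.com:1521/ORCLPDB1');

if (!$conn) {
    $e = oci_error();
    die("Connection failed: " . $e['message'] . "\n");
}

$stmt = oci_parse($conn, 'SELECT sysdate FROM dual');
oci_execute($stmt);
$row = oci_fetch_assoc($stmt);
echo "Connected, database time: " . $row['SYSDATE'] . "\n";

oci_free_statement($stmt);
oci_close($conn);
?>
```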

Install Apache and PHP on CentOS 8

 

Apache installation on CentOS 8

To install Apache on CentOS 8, you can use the `dnf` package manager, which is the replacement for `yum` in CentOS 8. Here's a step-by-step guide to installing Apache:

 

1. Open a terminal on your CentOS 8 system.

 

2. Update the package list to ensure you have the latest information about available packages:

sudo dnf update

 

3. Install Apache using the `dnf` package manager:

sudo dnf install httpd

 

4. After the installation is complete, start the Apache service:

sudo systemctl start httpd

 

5. Enable Apache to start on boot:

sudo systemctl enable httpd

 

6. Check the status of Apache to ensure it's running without any issues:

sudo systemctl status httpd

 

7. Adjust the firewall settings to allow incoming HTTP traffic:

sudo firewall-cmd --add-service=http --permanent

sudo firewall-cmd --reload

 

Now, Apache should be installed and running on your CentOS 8 system. You can verify its functionality by opening a web browser and accessing your server's IP address or domain name. You should see the default Apache welcome page if everything is set up correctly.

 

PHP installation on CentOS 8

 

To install PHP on CentOS 8, you can use the `dnf` package manager. Additionally, you may want to install some commonly used PHP extensions to ensure the proper functioning of PHP-based applications. Here's a step-by-step guide to installing PHP:

 

1. Open a terminal on your CentOS 8 system.

 

2. Update the package list to ensure you have the latest information about available packages:

sudo dnf update

 

3. Install PHP and some commonly used extensions:

sudo dnf install php php-cli php-fpm php-mysqlnd php-pdo php-gd php-xml php-mbstring

 

The packages above include the basic PHP package (`php`), command-line interface (`php-cli`), PHP-FPM (FastCGI Process Manager) for serving PHP through a web server, MySQL support (`php-mysqlnd`), PDO (PHP Data Objects) for database connectivity (`php-pdo`), GD library for image manipulation (`php-gd`), XML support (`php-xml`), and multibyte string support (`php-mbstring`).

 

4. After the installation is complete, start and enable the PHP-FPM service:

sudo systemctl start php-fpm

sudo systemctl enable php-fpm

 

5. Check the status of PHP-FPM to ensure it's running without any issues:

sudo systemctl status php-fpm

 

6. Restart Apache: After making any changes to the Apache or PHP-FPM configuration, restart Apache to apply the changes:

sudo systemctl restart httpd

 

 

Now, PHP is installed and ready to be used on your CentOS 8 system. You can test your PHP installation by creating a PHP file with the following content:

 

<?php

   phpinfo();

?>

 

Save the file as `info.php` in your web server's document root directory (typically `/var/www/html/`):

 

echo "<?php phpinfo(); ?>" | sudo tee /var/www/html/info.php

 

Then, open a web browser and navigate to `http://your_server_ip/info.php` or `http://your_domain/info.php`. You should see a PHP information page displaying PHP version, configuration settings, and more. Remember to remove this `info.php` file after testing for security reasons.

 

Run/Test website on Smartphone that is hosted on local PC

Thursday, August 3, 2023

Control session in PHP to prevent unauthorized access of pages directly ...

Sunday, July 30, 2023

What is a Rainbow Table and how is it used for hacking?

 A rainbow table is a type of precomputed lookup table used in password cracking and cryptographic attacks. It is a specialized data structure that enables an attacker to quickly reverse the hash value of a password or other data encrypted with a hash function.


When passwords are stored in a database or transmitted over a network, they are often hashed first. Hashing is a one-way function that converts the password into a fixed-length string of characters. It is designed to be irreversible, meaning that it should be computationally infeasible to derive the original password from the hash.


However, attackers can still attempt to crack passwords by using rainbow tables. Here's how they work:


1. **Generating the Rainbow Table**: To create a rainbow table, an attacker precomputes a large number of hash values for various possible passwords and stores them in a table. This process is computationally intensive and time-consuming, but it needs to be done only once.


2. **Hash Lookup**: When an attacker gets hold of a hashed password from a target system, instead of directly trying to reverse the hash, they can simply look up the hash value in their precomputed rainbow table to find a matching entry.


3. **Recovery**: Once a match is found, the attacker can retrieve the corresponding password from the rainbow table, thus successfully cracking the hashed password.


To protect against rainbow table attacks, security experts recommend using additional measures, such as salting passwords. Salting involves adding a unique random value (the salt) to each password before hashing it. This makes rainbow tables ineffective because attackers would need to create separate rainbow tables for each possible salt value, which is impractical due to the vast number of combinations.
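
As a minimal PHP sketch of this idea, the built-in password_hash() and password_verify() functions generate and check a random per-password salt automatically, which is exactly what makes precomputed rainbow tables ineffective:

```
<?php
// Sketch: salted password hashing defeats precomputed rainbow tables,
// because each stored hash depends on a random, per-password salt.
$password = "mySecurePassword123";

// password_hash() picks a random salt and embeds it in the result.
$storedHash = password_hash($password, PASSWORD_DEFAULT);
echo "Stored hash: $storedHash\n";

// At login, password_verify() extracts the salt and re-checks the attempt.
var_dump(password_verify("mySecurePassword123", $storedHash)); // bool(true)
var_dump(password_verify("wrongPassword", $storedHash));       // bool(false)
?>
```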


By using strong, salted cryptographic hashing algorithms and enforcing proper password management practices, organizations can enhance the security of their systems and protect against rainbow table attacks.